944 results for refinement calculus


Relevance: 10.00%

Abstract:

In this paper we present a solution concept for abstract systems called the admissible hierarchic set. The solution we propose is a refinement of the hierarchic solution, a generalization of the von Neumann and Morgenstern (vN&M) solution. For finite abstract systems we show that admissible hierarchic sets and vN&M stable sets are the only outcomes of a coalition formation procedure (Wilson, 1972; Roth, 1984). For coalitional games we prove that the core is either a vN&M stable set or an admissible hierarchic set.
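For context, the classical von Neumann and Morgenstern stability conditions that the hierarchic refinement builds on; the notation (X for the set of outcomes, ≻ for the dominance relation) is assumed here for illustration, not taken from the paper:

```latex
% S \subseteq X is a vN&M stable set for the dominance relation \succ iff:
\underbrace{\forall\, x, y \in S:\; x \not\succ y}_{\text{internal stability}}
\qquad\text{and}\qquad
\underbrace{\forall\, y \in X \setminus S\;\ \exists\, x \in S:\; x \succ y}_{\text{external stability}}
```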

Relevance: 10.00%

Abstract:

High-energy pneumatic shot peening is one of the methods for producing fine-grained materials through severe plastic deformation. Shot peening experiments were performed on pure copper using a high-energy pneumatic shot peening machine, and the surface roughness, hardness, and grain size of the treated specimens were measured with a surface profilometer, a microhardness tester, and electron backscatter diffraction (EBSD). The results show that the surface roughness decays exponentially with increasing treatment time; the grain size is refined by an order of magnitude, from an initial 25.48 μm down to 2.52 μm; and the microhardness increases from 0.58 GPa to 1.25 GPa. Dimensional analysis yields five dimensionless parameters governing the peening process, and finite element simulation is used to study their influence. A custom program generates random shot impacts and calls Ansys/LS-DYNA to simulate the impact process; the results of 1000 simulated impacts are compared with the corresponding experimental values.
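A hedged sketch of a functional form consistent with the reported exponential decay of roughness with treatment time; the saturating form and the parameters R_0, R_∞, and τ are illustrative assumptions, not values from the paper:

```latex
% Assumed illustrative decay law: roughness relaxes from its initial value R_0
% toward a saturation level R_\infty with time constant \tau.
R_a(t) \;=\; R_\infty + \left(R_0 - R_\infty\right)\, e^{-t/\tau}
```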

Relevance: 10.00%

Abstract:

A high-pressure torsion fixture for use on a materials testing machine was designed and fabricated, and pure copper specimens with different amounts of torsional deformation were obtained with it. Electron backscatter diffraction was used to measure the radial distribution of grain size in the processed specimens. The relationship between the degree of grain refinement and the strain was analyzed, and a simple-shear spherical-grain model describing grain refinement is proposed. The results show that the degree of grain refinement increases with increasing strain, with the average grain size reduced to 1/50 of its original value. When the shear strain is below 10, the grain size decreases rapidly with increasing shear strain; when the shear strain exceeds 10, the change in grain size slows, and the degree of grain refinement follows a power-law relationship with the Stuwe equivalent strain. The simple-shear spherical-grain model reproduces the trend of grain refinement increasing with shear strain, consistent with the experimental data.
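A generic power-law form of the reported relationship between refinement and the Stuwe equivalent strain ε_S; the prefactor A and exponent m are stand-ins that would be fit to the EBSD data, not values from the paper:

```latex
% Illustrative power law linking the refinement ratio to the Stuwe equivalent strain \varepsilon_S
\frac{d_0}{d} \;=\; A\,\varepsilon_S^{\,m}
```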

Relevance: 10.00%

Abstract:

In this thesis we propose a new approach to deduction methods for temporal logic. Our proposal is based on an inductive definition of eventualities that differs from the usual one. On the basis of this non-customary inductive definition of eventualities, we first provide dual systems of tableaux and sequents for Propositional Linear-time Temporal Logic (PLTL). Then, we adapt the deductive approach introduced by means of these dual tableau and sequent systems to the resolution framework and present a clausal temporal resolution method for PLTL. Finally, we use this new clausal temporal resolution method to establish logical foundations for declarative temporal logic programming languages.

The key difficulty in deduction systems for temporal logic is dealing with eventualities and with the hidden invariants that may prevent their fulfillment, and different ways of addressing this issue can be found in the literature. Traditional tableau systems for temporal logic generate an auxiliary graph in a first pass; then, in a second pass, unsatisfiable nodes are pruned. In particular, the second pass must check whether the eventualities are fulfilled. The one-pass tableau calculus introduced by S. Schwendimann requires additional bookkeeping to detect cyclic branches that contain unfulfilled eventualities. In traditional sequent calculi for temporal logic, the issue of eventualities and hidden invariants is tackled with inference rules (mainly, invariant-based rules or infinitary rules) that complicate their automation. A remarkable consequence of using either a two-pass approach based on auxiliary graphs or a one-pass approach that requires additional bookkeeping in the tableau framework, and either invariant-based rules or infinitary rules in the sequent framework, is that temporal logic fails to preserve the classical correspondence between tableaux and sequents.

In this thesis, we first provide a one-pass tableau method, TTM, that builds a cyclic tree, instead of a graph, to decide whether a set of PLTL-formulas is satisfiable. TTM tableaux are classical-like: for unsatisfiable sets of formulas, TTM produces tableaux whose leaves contain a formula and its negation; for satisfiable sets of formulas, it builds tableaux in which each fully expanded open branch characterizes a collection of models for the set of formulas in the root. The tableau method TTM is complete and yields a decision procedure for PLTL. It is directly associated with a one-sided sequent calculus called TTC. Since TTM is free from the structural rules that hinder the mechanization of deduction, e.g. weakening and contraction, the resulting sequent calculus TTC is also free from this kind of structural rules. In particular, TTC is free of any kind of cut, including invariant-based cut. From the deduction system TTC, we obtain a two-sided sequent calculus, GTC, that preserves all these freeness properties and is finitary, sound, and complete for PLTL. We thus show that the classical correspondence between tableaux and sequent calculi extends to temporal logic.

The most fruitful approach in the literature on resolution methods for temporal logic, initiated by the seminal paper of M. Fisher, deals with PLTL and requires generating invariants in order to perform resolution on eventualities. In this thesis, we present a new approach to resolution for PLTL whose main novelty is that no invariants are generated for performing resolution on eventualities. Our method is based on the dual tableau and sequent methods for PLTL mentioned above, and involves translation into a clausal normal form that is a direct extension of classical CNF. We first show that any PLTL-formula can be transformed into this clausal normal form. Then, we present our temporal resolution method, called TRS-resolution, which extends classical propositional resolution. Finally, we prove that TRS-resolution is sound and complete; in fact, it terminates for any input formula, deciding its satisfiability, and hence gives rise to a new decision procedure for PLTL.

In the field of temporal logic programming, the declarative proposals that provide a completeness result do not allow eventualities, whereas the proposals that follow the imperative-future approach either restrict the use of eventualities or handle them by calculating an upper bound based on the small model property for PLTL. In the latter, when the length of a derivation reaches the upper bound, the derivation is abandoned and backtracking is used to try another possible derivation. In this thesis we present a declarative propositional temporal logic programming language, called TeDiLog, that combines the temporal and disjunctive paradigms in logic programming. We establish the logical foundations of our proposal by formally defining operational and logical semantics for TeDiLog and by proving their equivalence. Since TeDiLog is, syntactically, a sublanguage of PLTL, the logical semantics of TeDiLog is supported by PLTL logical consequence; its operational semantics is based on TRS-resolution. TeDiLog allows both eventualities and always-formulas to occur in clause heads as well as in clause bodies. To the best of our knowledge, TeDiLog is the first declarative temporal logic programming language that achieves this degree of expressiveness. Since the tableau method presented in this thesis detects that the fulfillment of an eventuality is prevented by a hidden invariant without checking for it by means of an extra process, since our finitary sequent calculi do not include invariant-based rules, and since our resolution method dispenses with invariant generation, we say that our deduction methods are invariant-free.
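For reference, the customary inductive (fixpoint) unfolding of eventualities that the thesis's non-customary definition departs from; these are the standard PLTL identities, quoted here for context only:

```latex
% Standard fixpoint unfoldings of PLTL eventualities
% (the thesis adopts a different inductive definition):
\Diamond\varphi \;\equiv\; \varphi \,\vee\, \circ\Diamond\varphi
\qquad\qquad
\varphi\,\mathcal{U}\,\psi \;\equiv\; \psi \,\vee\, \bigl(\varphi \,\wedge\, \circ(\varphi\,\mathcal{U}\,\psi)\bigr)
```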

Relevance: 10.00%

Abstract:

Singular Value Decomposition (SVD) is a key linear-algebra operation in many scientific and engineering applications. In particular, many computational intelligence systems rely on machine learning methods involving high-dimensionality datasets that must be processed quickly for real-time adaptability. In this paper we describe a practical FPGA (Field Programmable Gate Array) implementation of an SVD processor for accelerating the solution of large LSE problems. The design approach has been comprehensive, from algorithmic refinement through numerical analysis to customization for an efficient hardware realization. The processing scheme rests on an adaptive vector-rotation evaluator for error regularization that enhances convergence speed with no penalty on solution accuracy. The proposed architecture, which follows a data-transfer scheme, is scalable and based on the interconnection of simple rotation units, allowing a trade-off between occupied area and processing acceleration in the final implementation. This permits the SVD processor to be implemented on both low-cost and high-end FPGAs, according to the final application requirements.
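As a software-level point of reference for what the hardware accelerates, a minimal sketch of solving an overdetermined linear system via the SVD pseudo-inverse in NumPy; this is a generic illustration and does not reflect the paper's FPGA architecture or its rotation-based scheme:

```python
import numpy as np

def svd_solve(A, b, rcond=1e-12):
    """Solve min ||Ax - b|| via the SVD pseudo-inverse."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # Regularize: drop singular values below rcond * s_max to control conditioning.
    s_inv = np.where(s > rcond * s[0], 1.0 / s, 0.0)
    return Vt.T @ (s_inv * (U.T @ b))

# Example: a random 200x10 overdetermined system.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 10))
b = rng.standard_normal(200)
x = svd_solve(A, b)
# Agrees with NumPy's built-in least-squares solver.
print(np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0]))
```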

Relevance: 10.00%

Abstract:

Olivine-structured LiFePO_4 crystals were synthesized by the solid-state method at synthesis temperatures of 670, 700, and 730 °C. The structure of the synthesized LiFePO_4 was studied by XRD structure refinement. The study found that the unit cell parameters a, b, and c change with synthesis temperature, increasing or decreasing in equal proportion. The change in synthesis temperature alters the Fe-O bond lengths in the Fe-O octahedra, and this change in bond length in turn shifts the energy of the Fe 3d orbitals. Compared with the crystals synthesized at 670 and 730 °C, the LiFePO_4 crystal synthesized at 700 °C has the largest effective area for lithium-ion diffusion.

Relevance: 10.00%

Abstract:

Various families of exact solutions to the Einstein and Einstein-Maxwell field equations of General Relativity are treated for situations of sufficient symmetry that only two independent variables arise. The mathematical problem then reduces to consideration of sets of two coupled nonlinear differential equations.

The physical situations in which such equations arise include: a) the external gravitational field of an axisymmetric, uncharged steadily rotating body, b) cylindrical gravitational waves with two degrees of freedom, c) colliding plane gravitational waves, d) the external gravitational and electromagnetic fields of a static, charged axisymmetric body, and e) colliding plane electromagnetic and gravitational waves. Through the introduction of suitable potentials and coordinate transformations, a formalism is presented which treats all these problems simultaneously. These transformations and potentials may be used to generate new solutions to the Einstein-Maxwell equations from solutions to the vacuum Einstein equations, and vice-versa.

The calculus of differential forms is used as a tool for generation of similarity solutions and generalized similarity solutions. It is further used to find the invariance group of the equations; this in turn leads to various finite transformations that give new, physically distinct solutions from old. Some of the above results are then generalized to the case of three independent variables.
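One well-known instance of such a reduction to two independent variables, for situation (a), is the Ernst formulation of the stationary axisymmetric vacuum problem; it is quoted here for context, since the thesis's own choice of potentials may differ:

```latex
% Ernst equation for the complex potential \mathcal{E}(\rho, z),
% with \nabla the flat axisymmetric gradient operator:
(\operatorname{Re}\mathcal{E})\,\nabla^{2}\mathcal{E} \;=\; \nabla\mathcal{E}\cdot\nabla\mathcal{E}
```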

Relevance: 10.00%

Abstract:

This thesis introduces fundamental equations and numerical methods for manipulating surfaces in three dimensions via conformal transformations. Conformal transformations are valuable in applications because they naturally preserve the integrity of geometric data. To date, however, there has been no clearly stated and consistent theory of conformal transformations that can be used to develop general-purpose geometry processing algorithms: previous methods for computing conformal maps have been restricted to the flat two-dimensional plane, or other spaces of constant curvature. In contrast, our formulation can be used to produce---for the first time---general surface deformations that are perfectly conformal in the limit of refinement. It is for this reason that we commandeer the title Conformal Geometry Processing.

The main contribution of this thesis is analysis and discretization of a certain time-independent Dirac equation, which plays a central role in our theory. Given an immersed surface, we wish to construct new immersions that (i) induce a conformally equivalent metric and (ii) exhibit a prescribed change in extrinsic curvature. Curvature determines the potential in the Dirac equation; the solution of this equation determines the geometry of the new surface. We derive the precise conditions under which curvature is allowed to evolve, and develop efficient numerical algorithms for solving the Dirac equation on triangulated surfaces.

From a practical perspective, this theory has a variety of benefits: conformal maps are desirable in geometry processing because they do not exhibit shear, and therefore preserve textures as well as the quality of the mesh itself. Our discretization yields a sparse linear system that is simple to build and can be used to efficiently edit surfaces by manipulating curvature and boundary data, as demonstrated via several mesh processing applications. We also present a formulation of Willmore flow for triangulated surfaces that permits extraordinarily large time steps and apply this algorithm to surface fairing, geometric modeling, and construction of constant mean curvature (CMC) surfaces.
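As a generic illustration of the kind of computation such a discretization leads to — not the thesis's actual operator or boundary handling — a minimal SciPy sketch of assembling and solving a sparse linear system of the sort mesh-processing algorithms produce:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Toy 1D Laplacian as a stand-in for a sparse operator assembled over a mesh.
n = 1000
main = 2.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
L = sp.diags([off, main, off], offsets=[-1, 0, 1], format="csc")

b = np.random.default_rng(1).standard_normal(n)
x = spla.spsolve(L, b)             # direct sparse solve
print(np.linalg.norm(L @ x - b))   # residual should be near machine precision
```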

Relevance: 10.00%

Abstract:

Hypervelocity impact of meteoroids and orbital debris poses a serious and growing threat to spacecraft. To study hypervelocity impact phenomena, a comprehensive ensemble of real-time, concurrently operated diagnostics has been developed and implemented in the Small Particle Hypervelocity Impact Range (SPHIR) facility. This suite of simultaneously operated instrumentation provides multiple complementary measurements that facilitate the characterization of many impact phenomena in a single experiment. The investigation described in this work focuses on normal impacts of 1.8 mm nylon 6/6 cylinder projectiles on aluminum targets of variable thickness. The SPHIR facility two-stage light-gas gun is capable of routinely launching 5.5 mg nylon impactors to speeds of 5 to 7 km/s. Refinement of legacy SPHIR operating procedures and an investigation of first-stage pressure have improved the velocity performance of the facility, increasing the average impact velocity by at least 0.57 km/s. Results for the perforation area indicate that the considered range of target thicknesses spans multiple regimes describing the non-monotonic scaling of target perforation with decreasing target thickness.

The laser side-lighting (LSL) system has been developed to provide ultra-high-speed shadowgraph images of the impact event. This novel optical technique is demonstrated to characterize the propagation velocity and two-dimensional optical density of impact-generated debris clouds. Additionally, a debris capture system is located behind the target during every experiment to provide complementary information on the trajectory distribution and penetration depth of individual debris particles. The use of a coherent, collimated illumination source in the LSL system permits simultaneous measurement of impact phenomena with near-IR and UV-vis spectrograph systems.

Comparison of LSL images with concurrent IR results reveals two distinctly different phenomena. A high-speed, pressure-dependent IR-emitting cloud is observed in experiments to expand at velocities much higher than the debris and ejecta observed with the LSL system. In double-plate target configurations, this phenomenon interacts with the rear wall several microseconds before the subsequent arrival of the debris cloud. Additionally, dimensional analysis presented by Whitham for blast waves is shown to describe the pressure-dependent radial expansion of the observed IR-emitting phenomenon. Although this work focuses on a single hypervelocity impact configuration, the diagnostic capabilities and techniques described can be used with a wide variety of impactors, materials, and geometries to investigate any number of engineering and scientific problems.

Relevance: 10.00%

Abstract:

The superspace approach provides a manifestly supersymmetric formulation of supersymmetric theories. For N=1 supersymmetry one can use either constrained or unconstrained superfields for such a formulation. Only the unconstrained formulation is suitable for quantum calculations. Until now, all interacting N>1 theories have been written using constrained superfields. No solutions of the nonlinear constraint equations were known.

In this work, we first review the superspace approach and its relation to conventional component methods. The difference between constrained and unconstrained formulations is explained, and the origin of the nonlinear constraints in supersymmetric gauge theories is discussed. It is then shown that these nonlinear constraint equations can be solved by transforming them into linear equations. The method is shown to work for N=1 Yang-Mills theory in four dimensions.

N=2 Yang-Mills theory is formulated in constrained form in six-dimensional superspace, which can be dimensionally reduced to four-dimensional N=2 extended superspace. We construct a superfield calculus for six-dimensional superspace, and show that known matter multiplets can be described very simply. Our method for solving constraints is then applied to the constrained N=2 Yang-Mills theory, and we obtain an explicit solution in terms of an unconstrained superfield. The solution of the constraints can easily be expanded in powers of the unconstrained superfield, and a similar expansion of the action is also given. A background-field expansion is provided for any gauge theory in which the constraints can be solved by our methods. Some implications of this for superspace gauge theories are briefly discussed.

Relevance: 10.00%

Abstract:

While concentrator photovoltaic cells have shown significant improvements in efficiency in the past ten years, once these cells are integrated into concentrating optics, connected to a power conditioning system, and deployed in the field, the overall module efficiency drops to only 34 to 36%. This efficiency is impressive compared to conventional flat-plate modules, but it falls far short of the theoretical limits for solar energy conversion. Ultra-high efficiency of 50% or greater cannot be reached by refinement and iteration of current design approaches.

This thesis takes a systems approach to designing a photovoltaic system capable of 50% efficient performance using conventional diode-based solar cells. The effort began with an exploration of the limiting efficiency of spectrum-splitting ensembles with 2 to 20 subcells in different electrical configurations. Incorporating realistic non-ideal performance into the computationally simple detailed balance approach yielded practical limits that are useful for identifying specific cell performance requirements. This effort quantified the relative benefit of additional cells and of concentration for system efficiency, which will help in designing practical optical systems.

Efforts to improve the quality of the solar cells themselves focused on the development of tunable lattice constant epitaxial templates. Initially intended to enable lattice matched multijunction solar cells, these templates would enable increased flexibility in band gap selection for spectrum splitting ensembles and enhanced radiative quality relative to metamorphic growth. The III-V material family is commonly used for multijunction solar cells both for its high radiative quality and for the ease of integrating multiple band gaps into one monolithic growth. The band gap flexibility is limited by the lattice constant of available growth templates. The virtual substrate consists of a thin III-V film with the desired lattice constant. The film is grown strained on an available wafer substrate, but the thickness is below the dislocation nucleation threshold. By removing the film from the growth substrate, allowing the strain to relax elastically, and bonding it to a supportive handle, a template with the desired lattice constant is formed. Experimental efforts towards this structure and initial proof of concept are presented.

Cells with high radiative quality present the opportunity to recover a large fraction of their radiative losses if they are incorporated in an ensemble that couples emission from one cell to another. This effect is well known, but has previously been explored in the context of subcells that operate independently at their maximum power point. This analysis explicitly accounts for the system interaction and identifies ways to enhance overall performance by operating some cells in an ensemble at voltages that reduce the power converted in the individual cell. Series-connected multijunctions, which by their nature facilitate strong optical coupling between subcells, are reoptimized with substantial performance benefit.

Photovoltaic efficiency is usually measured relative to a standard incident spectrum to allow comparison between systems. Deployed in the field, systems may differ in energy production due to sensitivity to changes in the spectrum. The series-connection constraint in particular causes system efficiency to decrease as the incident spectrum deviates from the standard spectral composition. This thesis performs a case study comparing the performance of systems over a year at a particular location to identify the energy-production penalty caused by series connection relative to independent electrical connection.
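A minimal sketch of the "computationally simple detailed balance approach" for a single junction, under idealized assumptions (6000 K blackbody sun, full concentration, unit absorptance above the gap); all parameters here are illustrative, not the thesis's models:

```python
import numpy as np

k_B = 8.617333e-5            # Boltzmann constant, eV/K
T_SUN, T_CELL = 6000.0, 300.0

E = np.linspace(0.02, 10.0, 20000)   # photon energy grid, eV

def photon_flux(E, T, mu=0.0):
    """Blackbody photon flux ~ E^2 / (exp((E - mu)/kT) - 1).
    The geometric prefactor cancels in the efficiency ratio under full concentration."""
    return E**2 / np.expm1((E - mu) / (k_B * T))

def efficiency(Eg):
    above = E >= Eg
    absorbed = np.trapz(photon_flux(E[above], T_SUN), E[above])
    p_in = np.trapz(E * photon_flux(E, T_SUN), E)   # incident power
    best = 0.0
    for V in np.linspace(0.0, Eg - 0.01, 400):
        # Emission from the cell at chemical potential mu = qV.
        emitted = np.trapz(photon_flux(E[above], T_CELL, mu=V), E[above])
        J = absorbed - emitted                      # net photon current per unit charge
        best = max(best, J * V / p_in)
    return best

for Eg in (0.8, 1.1, 1.4):
    print(f"Eg = {Eg} eV -> detailed-balance efficiency ~ {efficiency(Eg):.2%}")
```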

Relevance: 10.00%

Abstract:

Understanding the origin of life on Earth has long fascinated the minds of the global community, and has been a driving factor in interdisciplinary research for centuries. Beyond the pioneering work of Darwin, perhaps the most widely known study in the last century is that of Miller and Urey, who examined the possibility of the formation of prebiotic chemical precursors on the primordial Earth [1]. More recent studies have shown that amino acids, the chemical building blocks of the biopolymers that comprise life as we know it on Earth, are present in meteoritic samples, and that the molecules extracted from the meteorites display isotopic signatures indicative of an extraterrestrial origin [2]. The most recent major discovery in this area has been the detection of glycine (NH2CH2COOH), the simplest amino acid, in pristine cometary samples returned by the NASA STARDUST mission [3]. Indeed, the open questions left by these discoveries, both in the public and scientific communities, hold such fascination that NASA has designated the understanding of our "Cosmic Origins" as a key mission priority.

Despite these exciting discoveries, our understanding of the chemical and physical pathways to the formation of prebiotic molecules is woefully incomplete. This is largely because we do not yet fully understand how the interplay between grain-surface and sub-surface ice reactions and the gas-phase affects astrophysical chemical evolution, and our knowledge of chemical inventories in these regions is incomplete. The research presented here aims to directly address both these issues, so that future work to understand the formation of prebiotic molecules has a solid foundation from which to work.

From an observational standpoint, a dedicated campaign was undertaken to identify hydroxylamine (NH2OH), potentially a direct precursor to glycine, in the gas phase. No trace of NH2OH was found. These observations motivated a refinement of the chemical models of glycine formation, and have largely ruled out a gas-phase route to the synthesis of the simplest amino acid in the ISM. A molecular mystery concerning the carrier of a series of transitions was resolved using observational data toward a large number of sources, confirming the identity of this important carbon-chemistry intermediate, B11244, as l-C3H+ and identifying it in at least two new environments. Finally, the doubly-nitrogenated molecule carbodiimide (HNCNH) was identified in the ISM for the first time through maser emission features in the centimeter-wavelength regime.

In the laboratory, a terahertz time-domain spectrometer was constructed to obtain the experimental spectra necessary to search for solid-phase species in the ISM in the THz region of the spectrum. These investigations reveal a striking dependence of the THz spectra on the large-scale, long-range (i.e., lattice) structure of the ices. A database of molecular spectra has been started; both the simplest and most abundant ice species, which have already been identified, and a number of more complex species have been studied. The exquisite sensitivity of THz spectra to both the structure and the thermal history of these ices may lead to better probes of complex chemical and dynamical evolution in interstellar environments.

Relevance: 10.00%

Abstract:

This dissertation presents an analytical model of the evolutionary process formulated by the Theory of Evolution by Endosymbiosis, represented as a succession of stages involving different ecological and metabolic interactions between bacterial populations, considering both the population dynamics and the productive processes of these populations. The approach uses the system of differential equations known as the Volterra-Hamilton system, together with certain geometric concepts from KCC Theory and Projective Geometry. The main calculations were carried out with the FINSLER algebraic programming package, running on MAPLE.
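A hypothetical sketch of integrating a two-population Volterra-type interaction system with SciPy, shown as a generic predator-prey form for illustration; the dissertation's actual Volterra-Hamilton system, KCC analysis, and the FINSLER/MAPLE computations are not reproduced here:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Generic two-population Volterra-type system; all coefficients are illustrative.
a, b, c, d = 1.0, 0.4, 0.3, 1.2

def volterra(t, y):
    n1, n2 = y
    return [n1 * (a - b * n2),      # host population: grows, inhibited by partner
            n2 * (c * n1 - d)]      # partner population: sustained by interaction

sol = solve_ivp(volterra, (0.0, 50.0), [2.0, 1.0], dense_output=True, rtol=1e-8)
t = np.linspace(0, 50, 5)
print(sol.sol(t).round(3))          # population trajectories at sample times
```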

Relevance: 10.00%

Abstract:

Reinforced concrete highway bridges are subject to variable dynamic actions due to vehicle traffic on the deck. These dynamic actions can initiate fractures or propagate existing ones in the structure. Proper consideration of these aspects motivated the development of a study to evaluate the effects of heavy-vehicle traffic on the deck. Techniques for counting stress cycles and the application of cumulative damage rules were analyzed using the S-N curves of several design codes. The highway bridge investigated consists of four longitudinal girders, three cross-girders, and a reinforced concrete deck. The computational model developed for the dynamic analysis of the bridge was based on standard discretization techniques using the finite element method. The structural model of the bridge was built from three-dimensional solid finite elements. The vehicles are represented by mass-spring-damper systems, and their traffic is modeled by simulating semi-infinite convoys moving at constant speed across the bridge deck. The conclusions of this work concern the service life of the structural elements of reinforced concrete highway bridges subjected to dynamic actions from heavy-vehicle traffic on the deck.
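A minimal sketch of the cumulative damage (Palmgren-Miner) rule combined with a Basquin-type S-N curve, of the kind used in fatigue checks like those described above; the S-N constants and the cycle counts are hypothetical placeholders, not values from the dissertation or from any specific design code:

```python
import numpy as np

# Basquin-type S-N curve: N(S) = A * S**(-m). A and m are hypothetical placeholders;
# real values come from the design code's S-N curve for the detail category.
A, m = 2.0e12, 3.0

def cycles_to_failure(stress_range_mpa):
    return A * stress_range_mpa ** (-m)

# Hypothetical annual stress-range histogram from cycle counting (e.g., rainflow):
# (stress range in MPa, cycles per year)
histogram = [(40.0, 2.0e6), (80.0, 3.0e5), (120.0, 2.0e4)]

# Palmgren-Miner rule: damage D = sum(n_i / N_i); failure is predicted at D = 1.
annual_damage = sum(n / cycles_to_failure(s) for s, n in histogram)
print(f"Annual damage: {annual_damage:.4f}")
print(f"Estimated fatigue life: {1.0 / annual_damage:.1f} years")
```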