413 results for PROOFS
Abstract:
The complex domain structure in ferroelectrics gives rise to electromechanical coupling, and its evolution (via domain switching) results in a time-dependent (i.e. viscoelastic) response. Although ferroelectrics are used in many technological applications, most do not attempt to exploit the viscoelastic response of ferroelectrics, mainly due to a lack of understanding and of accurate models for its description and prediction. Thus, the aim of this thesis research is to gain a better understanding of the influence of domain evolution in ferroelectrics on their dynamic mechanical response. There have been few studies of the viscoelastic properties of ferroelectrics, mainly due to a lack of experimental methods. Therefore, an apparatus and method called Broadband Electromechanical Spectroscopy (BES) was designed and built. BES allows for the simultaneous application of dynamic mechanical and electrical loading in a vacuum environment. Using BES, the dynamic stiffness and loss tangent in bending and torsion of a particular ferroelectric, viz. lead zirconate titanate (PZT), were characterized for different combinations of electrical and mechanical loading frequencies throughout the entire electric displacement hysteresis. Experimental results showed significant increases in loss tangent (by nearly an order of magnitude) and in compliance during domain switching, which shows promise as a new approach to structural damping. A continuum model of the viscoelasticity of ferroelectrics was developed, which incorporates microstructural evolution via internal variables and associated kinetic relations. For the first time, through a new linearization process, the incremental dynamic stiffness and loss tangent of materials were computed throughout the entire electric displacement hysteresis for different combinations of mechanical and electrical loading frequencies. The model accurately captured the experimental results. Using the understanding gained from the characterization and modeling of PZT, two applications of domain switching kinetics were explored using Micro Fiber Composites (MFCs). Proofs of concept of set-and-hold actuation and structural damping using MFCs were demonstrated.
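For reference, the dynamic stiffness and loss tangent measured here are the standard linear-viscoelastic quantities under steady sinusoidal loading; the notation below is generic and not necessarily the thesis's own symbols.

```latex
% Under steady sinusoidal loading \varepsilon(t) = \varepsilon_0 \sin(\omega t),
% the stress response lags by a phase angle \delta, defining a complex
% (dynamic) stiffness and the loss tangent:
\[
  E^{*}(\omega) = E'(\omega) + i\,E''(\omega),
  \qquad
  \tan\delta(\omega) = \frac{E''(\omega)}{E'(\omega)},
\]
% where E' is the storage (dynamic) stiffness and E'' the loss stiffness;
% \tan\delta measures the energy dissipated per cycle relative to the energy stored.
```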
Abstract:
The resolution of the so-called thermodynamic paradox is presented in this paper. It is shown, in direct contradiction to the results of several previously published papers, that the cutoff modes (evanescent modes having complex propagation constants) can carry power in a waveguide containing ferrite. The errors in all previous “proofs” which purport to show that the cutoff modes cannot carry power are uncovered. The boundary value problem underlying the paradox is studied in detail; it is shown that, although the solution is somewhat complicated, there is nothing paradoxical about it.
The general problem of electromagnetic wave propagation through rectangular guides filled inhomogeneously in cross-section with transversely magnetized ferrite is also studied. Application of the standard waveguide techniques reduces the TM part to the well-known self-adjoint Sturm–Liouville eigenvalue equation. The TE part, however, leads in general to a non-self-adjoint eigenvalue equation. This equation and the associated expansion problem are studied in detail. Expansion coefficients and actual fields are determined for a particular problem.
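For reference, the self-adjoint Sturm–Liouville eigenvalue problem that the TM part reduces to has the standard form shown below (generic notation, not the paper's).

```latex
% Standard self-adjoint Sturm–Liouville eigenvalue problem on an interval a < x < b,
% with homogeneous boundary conditions at the endpoints:
\[
  -\frac{d}{dx}\!\left( p(x)\,\frac{d\phi}{dx} \right) + q(x)\,\phi(x)
  = \lambda\, w(x)\,\phi(x),
  \qquad p(x),\, w(x) > 0 .
\]
% The abstract notes that the TM part reduces to this form, while the TE part
% leads in general to a non-self-adjoint eigenvalue equation.
```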
Abstract:
In this work I present recent scientific papers related to the concept of tree-depth: different characterizations, a game-theoretic approach to it, and recently discovered applications. The focus of this work is on presenting all the ideas in a self-contained way, so that they can be easily understood with little prior knowledge. Apart from that, all the ideas are presented in a homogeneous way with clear examples, and all the lemmas, some of which did not have proofs in the original papers, are presented with rigorous proofs.
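As a pointer to the central definition the survey builds on, the sketch below implements the usual recursive characterization of tree-depth: td of a single vertex is 1, td of a connected graph G is 1 plus the minimum of td(G - v) over vertices v, and td of a disconnected graph is the maximum over its components. It is exponential-time and purely illustrative; the function names are mine, not the thesis's.

```python
def tree_depth(vertices, adj):
    """Exponential-time tree-depth of the graph induced on `vertices`.

    `adj` maps each vertex to the set of its neighbours.
    td(single vertex) = 1
    td(connected G)   = 1 + min over v of td(G - v)
    td(disconnected)  = max over connected components
    """
    comps = components(vertices, adj)
    if len(comps) > 1:
        return max(tree_depth(c, adj) for c in comps)
    if len(vertices) == 1:
        return 1
    return 1 + min(tree_depth(vertices - {v}, adj) for v in vertices)


def components(vertices, adj):
    """Connected components of the graph induced on `vertices`."""
    seen, comps = set(), []
    for start in vertices:
        if start in seen:
            continue
        comp, stack = set(), [start]
        while stack:
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            stack.extend((adj[v] & vertices) - comp)
        seen |= comp
        comps.append(comp)
    return comps


if __name__ == "__main__":
    # Path on 4 vertices: its tree-depth is 3.
    adj = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
    print(tree_depth(set(adj), adj))  # -> 3
```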
Abstract:
Our objective in this work is to investigate how graphic design relates to the means of production of the Brazilian printing industry. It is uncommon to find references addressing topics that relate design and technology in books on the history of Brazil, which demonstrates the urgency of recovering the history of industries that implemented new technologies to meet design and market demands. Headquartered in the city of Rio de Janeiro, the plate-making firm (clicheria) Latt-Mayer, as it was commonly called, was synonymous with quality and technology. Among its clients we can point to advertising agencies, publishers, design offices, and print shops from all over the country. This research is based on a corpus composed of images, documents, technical manuals, news reports, printing plates, and interviews with people who were directly involved with the company. We catalogued the machinery and techniques used, as well as relevant graphic design projects whose plates and proofs were produced at the company. Through this work, it is possible to see a comprehensive historical panorama covering a period in which significant transformations took place in the relationship between designers and technology in the Brazilian printing industry.
Abstract:
The paper considers the feedback stabilization of periodic orbits in a planar juggler. The juggler is "blind," i.e., it has no sensing capabilities other than the detection of impact times. The robustness analysis of the proposed control suggests that the arm's acceleration at impact is a crucial design parameter even though it plays no role in the stability analysis. Analytical results and convergence proofs are provided for a simplified model of the juggler. The control law is then adapted to a more accurate model and validated in an experimental setup. © 2007 IEEE.
Abstract:
Using the nonlinear analog of the Fake Riccati equation developed for linear systems, we derive an inverse optimality result for several receding-horizon control schemes. This inverse optimality result unifies stability proofs and shows that receding-horizon control possesses the stability margins of optimal control laws. © 1997 Elsevier Science B.V.
Abstract:
Mathematical theorems in control theory are only of interest in so far as their assumptions relate to practical situations. The space of systems with transfer functions in ℋ∞, for example, has many advantages mathematically, but includes large classes of non-physical systems, and one must be careful in drawing inferences from results in that setting. Similarly, the graph topology has long been known to be the weakest, or coarsest, topology in which (1) feedback stability is a robust property (i.e. preserved in small neighbourhoods) and (2) the map from open- to closed-loop transfer functions is continuous. However, it is not known whether continuity is a necessary part of this statement, or whether it is only required for the existing proofs. It is entirely possible that the answer depends on the underlying classes of systems used. The class of systems we concern ourselves with here is the set of systems that can be approximated, in the graph topology, by real rational transfer function matrices; that is, lumped-parameter models, or those distributed systems for which it makes sense to use finite element methods. This is precisely the set of systems that have continuous frequency responses in the extended complex plane. For this class, we show that there is indeed a weaker topology in which feedback stability is robust but for which the maps from open- to closed-loop transfer functions are not necessarily continuous. © 2013 Copyright Taylor and Francis Group, LLC.
Abstract:
Proofs of ownership of digital works allow copyright claims to be verified without leaking any secret information and while preventing cheating by the owner. This paper proposes an ownership-proof scheme for digital works based on proactive verifiable secret sharing and secure multi-party computation. In this scheme, verifiable secret sharing guarantees the correctness of the ownership secret and prevents the protocol participants from being cheated, while proactive security provides automatic recovery to guarantee the integrity and confidentiality of the secret throughout the protocol's lifetime. Ownership verification is realized using secure multi-party computation and zero-knowledge proofs based on homomorphic commitments. Without assuming the existence of a trusted party, the proposed scheme can carry out the computation correctly and detect dishonest members, provided not too many members collude.
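For background, verifiable secret sharing (and its proactive variant) is usually built on top of Shamir's threshold scheme. The sketch below shows only the plain Shamir layer over a prime field, without the verification commitments or proactive share refresh described in the abstract; names and parameters are illustrative.

```python
import random

P = 2**127 - 1  # a Mersenne prime used as the field modulus, for illustration only

def share(secret, n, t):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P  # Fermat inverse
    return secret

if __name__ == "__main__":
    shares = share(secret=123456789, n=5, t=3)
    print(reconstruct(shares[:3]))  # any 3 of the 5 shares -> 123456789
```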
Abstract:
Laih posed the problem of designing a signature scheme with a designated verifier and gave a solution. This paper first shows that that scheme has a serious security flaw, and then proposes the signature scheme SV-EDL, which solves the cryptographic problem above. At the same time, provable-security theory is introduced into the analysis and design of such schemes, and it is proved in the random oracle (RO) model that the unforgeability of SV-EDL is tightly related to the computational Diffie-Hellman (CDH) problem, i.e., forging an SV-EDL signature is almost as hard as solving the CDH problem; moreover, for anyone other than the designated verifier, the ability to verify a signature is closely related to the decisional Diffie-Hellman (DDH) problem. Since it is widely accepted that the hardness of the CDH and DDH problems is closely tied to the discrete logarithm (DL) problem, the proposed scheme offers better security guarantees than comparable existing schemes. In addition, the scheme satisfies the non-repudiation requirement in a very simple and direct way. Finally, a threshold verification protocol for a system of verification servers is proposed and constructed, and its security is proved in the standard model. The scheme does not require the existence of a trusted center.
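For reference, the two hardness assumptions the abstract appeals to are the standard Diffie-Hellman problems in a cyclic group of prime order q with generator g (generic statements, not the paper's notation).

```latex
% Computational Diffie-Hellman (CDH): given (g, g^{a}, g^{b}) for uniformly
% random a, b in Z_q, compute
\[
  g^{ab}.
\]
% Decisional Diffie-Hellman (DDH): given (g, g^{a}, g^{b}, g^{c}), decide whether
\[
  c \equiv ab \pmod{q}.
\]
% Per the abstract: forging an SV-EDL signature is tied to CDH, while the ability
% of anyone other than the designated verifier to verify a signature is tied to DDH.
```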
Abstract:
Trend surface analysis was used to study the regional distribution of the contents of 39 elements in soils of the Loess Plateau. The results show that the contents of most elements in Loess Plateau soils are close to those of the loess parent material, and that their regional differentiation follows a particle-size-related pattern. Pedogenic processes and the bioclimatic environment are the main factors affecting the regional differentiation of soil element contents. The regional distribution of the elements provides soil-geochemical evidence in support of the aeolian (wind-deposition) theory of the Loess Plateau's origin.
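Trend surface analysis fits a low-order polynomial in the spatial coordinates to the measured values by least squares and reads the regional pattern from the fitted surface. The sketch below uses synthetic data; the data, basis order, and function names are illustrative assumptions, not the study's.

```python
import numpy as np

def fit_trend_surface(x, y, z, order=2):
    """Least-squares polynomial trend surface z ~ f(x, y) of the given order.

    Returns the monomial exponents and coefficients for the basis
    [x**i * y**j for i + j <= order].
    """
    terms = [(i, j) for i in range(order + 1) for j in range(order + 1 - i)]
    A = np.column_stack([x**i * y**j for i, j in terms])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return terms, coeffs

def evaluate(terms, coeffs, x, y):
    """Evaluate the fitted trend surface at a point (or grid)."""
    return sum(c * x**i * y**j for (i, j), c in zip(terms, coeffs))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x, y = rng.uniform(0, 100, 200), rng.uniform(0, 100, 200)
    z = 5 + 0.02 * x - 0.01 * y + rng.normal(0, 0.5, 200)  # synthetic "element content"
    terms, coeffs = fit_trend_surface(x, y, z, order=1)
    print(evaluate(terms, coeffs, 50.0, 50.0))  # trend value at a grid point
```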
Abstract:
Addressing the strong bargaining power of the retailer in a many-to-one supply chain structure, a Stackelberg leader-follower game model is established with the retailer as the leader and the manufacturers as the followers. A proof is given that, in the symmetric game in which the retailer offers the contract terms, the manufacturers' production strategy has a unique optimal solution. The retailer's decision problem over the contract parameters is analyzed, and the relationship between the decisions of the decentralized supply chain under a revenue-sharing contract and those of the centralized supply chain is discussed. Simulation experiments are used to analyze and verify the influence of the contract parameters and of product substitutability on supply chain performance.
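The leader-follower (Stackelberg) logic is solved by backward induction: the follower's best response is computed first, and the leader optimizes while anticipating it. The sketch below illustrates this with a single hypothetical manufacturer, linear demand, and a revenue share phi offered by the retailer; the parameters and functional forms are illustrative assumptions, not the paper's model.

```python
import numpy as np

# Hypothetical parameters: inverse demand p(q) = a - b*q, unit production cost c.
a, b, c = 100.0, 1.0, 20.0

def manufacturer_best_response(phi, q_grid):
    """Follower: choose output q maximizing its revenue share minus production cost."""
    profit = phi * (a - b * q_grid) * q_grid - c * q_grid
    return q_grid[np.argmax(profit)]

def retailer_optimum(phi_grid, q_grid):
    """Leader: choose the revenue share phi, anticipating the follower's response."""
    best = None
    for phi in phi_grid:
        q = manufacturer_best_response(phi, q_grid)
        r_profit = (1.0 - phi) * (a - b * q) * q
        if best is None or r_profit > best[2]:
            best = (phi, q, r_profit)
    return best

if __name__ == "__main__":
    phi_grid = np.linspace(0.05, 0.95, 91)
    q_grid = np.linspace(0.0, 100.0, 2001)
    phi, q, profit = retailer_optimum(phi_grid, q_grid)
    idx = np.argmax((a - b * q_grid) * q_grid - c * q_grid)  # centralized benchmark
    print(f"decentralized: phi={phi:.2f}, q={q:.1f}, retailer profit={profit:.1f}")
    print(f"centralized quantity: {q_grid[idx]:.1f}")
```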
Abstract:
Since the mid-1980s, the mechanisms of transfer of training between cognitive subskills that rest on the same body of declarative knowledge have received considerable attention. The dominant theory is the common-element theory (Singley & Anderson, 1989), which predicts that there will be little or no transfer between subskills within the same domain when knowledge is used in different ways, even though the subskills might rest on a common body of declarative knowledge. This idea is termed the "principle of use specificity of knowledge" (Anderson, 1987). Although this principle has gained some empirical support in domains such as elementary geometry (Neves & Anderson, 1981) and computer programming (McKendree & Anderson, 1987), it is challenged by research (Pennington et al., 1991; 1995) in which substantially larger amounts of transfer of training were found between subskills that rest on shared declarative knowledge but share few procedures (production rules). Pennington et al. (1995) provided evidence that these larger amounts of transfer are due to the elaboration of declarative knowledge. Our research tests these two different explanations by considering transfer between two subskills within the domains of elementary geometry and elementary algebra respectively, and the influence of learning method ("learning from examples" and "learning from declarative text") and subject ability (high, middle, low) on the amount of transfer. Within the domain of elementary geometry, the two subskills of "generating proofs" (GP) and "explaining proofs" (EP), which rest on the declarative knowledge of "theorems on the properties of the parallelogram", share few procedures. Within the domain of elementary algebra, the two subskills of "calculation" (C) and "simplification" (S), which rest on the declarative knowledge of "multiplication of radicals", share somewhat more procedures. The results demonstrate that: 1. Within the domain of elementary geometry, although little transfer was found between the two subskills of GP and EP for the subjects as a whole, different results emerged when the factor of subject ability was considered. Among the high-ability subjects, significant positive transfer was found from EP to GP, while little transfer was found in the opposite direction (i.e., from GP to EP). Among the low-ability subjects, significant positive transfer was found from EP to GP, while significant negative transfer was found in the opposite direction. For the middle-ability subjects, little transfer was found between the two subskills. 2. Within the domain of elementary algebra, significant positive transfer was found from S to C, while significant negative transfer was found in the opposite direction (i.e., from C to S), when the subjects were considered as a whole. The same pattern of transfer occurred among the middle-ability and low-ability subjects. Among the high-ability subjects, no transfer was found between the two subskills. 3. Within these two domains, different learning methods had little influence on transfer of training between subskills. Apparently, these results cannot be attributed either to common procedures or to the elaboration of declarative knowledge. A synthetic examination is needed to construct a reasonable explanation of these results, one that takes into account the following three elements: (1) the relations between the procedures of the subskills; (2) the elaboration of declarative knowledge; (3) the elaboration of procedural knowledge.
Excluding the factor of the subject, transfer of training between subskills can be predicted and explained by analyzing the relations between the procedures of the two subskills. However, when particular subjects are considered, the explanation of transfer of training between subskills must include the subjects' elaboration of declarative and procedural knowledge, especially the influence of that elaboration on performing the other subskill. The finding that different learning methods had little influence on transfer of training between subskills can be explained by the fact that the two methods did not affect the level of declarative knowledge. Protocol analysis provided evidence to support these hypotheses. From this research, we conclude that in order to explain the mechanisms of transfer of training between cognitive subskills that rest on the same body of declarative knowledge, three elements must be considered together: (1) the relations between the procedures of the subskills; (2) the elaboration of declarative knowledge; (3) the elaboration of procedural knowledge.
Abstract:
This paper introduces Denotational Proof Languages (DPLs). DPLs are languages for presenting, discovering, and checking formal proofs. In particular, in this paper we discuss type-alpha DPLs, a simple class of DPLs for which termination is guaranteed and proof checking can be performed in time linear in the size of the proof. Type-alpha DPLs allow for lucid proof presentation and for efficient proof checking, but not for proof search. Type-omega DPLs allow for search as well as simple presentation and checking, but termination is no longer guaranteed and proof checking may diverge. We do not study type-omega DPLs here. We start by listing some common characteristics of DPLs. We then illustrate with a particularly simple example: a toy type-alpha DPL called PAR, for deducing parities. We present the abstract syntax of PAR, followed by two different kinds of formal semantics: evaluation and denotational. We then relate the two semantics and show how proof checking becomes tantamount to evaluation. We proceed to develop the proof theory of PAR, formulating and studying certain key notions such as observational equivalence that pervade all DPLs. We then present NDL, a type-alpha DPL for classical zero-order natural deduction. Our presentation of NDL mirrors that of PAR, showing how every basic concept that was introduced in PAR resurfaces in NDL. We present sample proofs of several well-known tautologies of propositional logic that demonstrate our thesis that DPL proofs are readable, writable, and concise. Next we contrast DPLs with typed logics based on the Curry-Howard isomorphism, and discuss the distinction between pure and augmented DPLs. Finally we consider the issue of implementing DPLs, presenting an implementation of PAR in SML and one in Athena, and end with some concluding remarks.
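PAR's concrete syntax is not given in the abstract; the sketch below is my own toy illustration of the general idea that checking a deduction amounts to evaluating it against an assumption base. The proposition forms and rule names are hypothetical, not PAR's.

```python
# Toy illustration of "proof checking as evaluation" for a parity logic.
# Propositions are ("even", n) or ("odd", n).  A deduction either cites an
# assumption or applies a primitive rule to the conclusion of a subdeduction.

class CheckError(Exception):
    pass

def evaluate(deduction, assumptions):
    """Evaluate a deduction against an assumption base; return its conclusion."""
    kind = deduction[0]
    if kind == "claim":                       # cite a proposition already assumed
        prop = deduction[1]
        if prop not in assumptions:
            raise CheckError(f"not in assumption base: {prop}")
        return prop
    if kind == "succ":                        # from even(n) infer odd(n+1), and vice versa
        parity, n = evaluate(deduction[1], assumptions)
        return ("odd" if parity == "even" else "even", n + 1)
    raise CheckError(f"unknown deduction form: {kind}")

if __name__ == "__main__":
    base = {("even", 0)}
    proof = ("succ", ("succ", ("claim", ("even", 0))))
    print(evaluate(proof, base))   # -> ('even', 2); evaluation *is* the proof check
```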
Abstract:
Type-omega DPLs (Denotational Proof Languages) are languages for proof presentation and search that offer strong soundness guarantees. LCF-type systems such as HOL offer similar guarantees, but their soundness relies heavily on static type systems. By contrast, DPLs ensure soundness dynamically, through their evaluation semantics; no type system is necessary. This is possible owing to a novel two-tier syntax that separates deductions from computations, and to the abstraction of assumption bases, which is factored into the semantics of the language and allows for sound evaluation. Every type-omega DPL properly contains a type-alpha DPL, which can be used to present proofs in a lucid and detailed form, exclusively in terms of primitive inference rules. Derived inference rules are expressed as user-defined methods, which are "proof recipes" that take arguments and dynamically perform appropriate deductions. Methods arise naturally via parametric abstraction over type-alpha proofs. In that light, the evaluation of a method call can be viewed as a computation that carries out a type-alpha deduction. The type-alpha proof "unwound" by such a method call is called the "certificate" of the call. Certificates can be checked by exceptionally simple type-alpha interpreters, and thus they are useful whenever we wish to minimize our trusted base. Methods are statically closed over lexical environments, but dynamically scoped over assumption bases. They can take other methods as arguments, they can iterate, and they can branch conditionally. These capabilities, in tandem with the bifurcated syntax of type-omega DPLs and their dynamic assumption-base semantics, allow the user to define methods in a style that is disciplined enough to ensure soundness yet fluid enough to permit succinct and perspicuous expression of arbitrarily sophisticated derived inference rules. We demonstrate every major feature of type-omega DPLs by defining and studying NDL-omega, a higher-order, lexically scoped, call-by-value type-omega DPL for classical zero-order natural deduction---a simple choice that allows us to focus on type-omega syntax and semantics rather than on the subtleties of the underlying logic. We start by illustrating how type-alpha DPLs naturally lead to type-omega DPLs by way of abstraction; present the formal syntax and semantics of NDL-omega; prove several results about it, including soundness; give numerous examples of methods; point out connections to the lambda-phi calculus, a very general framework for type-omega DPLs; introduce a notion of computational and deductive cost; define several instrumented interpreters for computing such costs and for generating certificates; explore the use of type-omega DPLs as general programming languages; show that DPLs do not have to be type-less by formulating a static Hindley-Milner polymorphic type system for NDL-omega; discuss some idiosyncrasies of type-omega DPLs such as the potential divergence of proof checking; and compare type-omega DPLs to other approaches to proof presentation and discovery. Finally, a complete implementation of NDL-omega in SML-NJ is given for users who want to run the examples and experiment with the language.
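To make the notions of "method" and "certificate" concrete, here is a toy sketch in the spirit of the parity example under the previous abstract: a method is an ordinary procedure that performs primitive inferences, and the list of primitive steps it performs plays the role of its certificate. Names and rules are hypothetical illustrations, not NDL-omega's.

```python
def make_prover(assumptions):
    """Return primitive inference rules that record every step they perform."""
    certificate = []                      # primitive steps, in order

    def claim(prop):
        assert prop in assumptions, f"not assumed: {prop}"
        certificate.append(("claim", prop))
        return prop

    def succ(prop):
        parity, n = prop
        conclusion = ("odd" if parity == "even" else "even", n + 1)
        certificate.append(("succ", prop, conclusion))
        return conclusion

    return claim, succ, certificate

def parity_of(n, claim, succ):
    """A derived rule ("method"): prove the parity of n from even(0) by iterating succ."""
    prop = claim(("even", 0))
    for _ in range(n):
        prop = succ(prop)
    return prop

if __name__ == "__main__":
    claim, succ, cert = make_prover({("even", 0)})
    print(parity_of(5, claim, succ))              # -> ('odd', 5)
    print(len(cert), "primitive steps recorded")  # the "certificate" of the call
```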
Abstract:
This paper presents an algorithm for simplifying NDL deductions. An array of simplifying transformations is rigorously defined. The transformations are shown to be terminating and to respect the formal semantics of the language. We also show that the transformations never increase the size or complexity of a deduction; in the worst case, they produce deductions of the same size and complexity as the original. We present several examples of proofs containing various types of "detours", and explain how our procedure eliminates them, resulting in smaller and cleaner deductions. All of the given transformations are fully implemented in SML-NJ. The complete code listing is presented, along with explanatory comments. Finally, although the transformations given here are defined for NDL, we point out that they can be applied to any type-alpha DPL that satisfies a few simple conditions.
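As an illustration of the kind of "detour" such transformations remove, the sketch below rewrites a conjunction introduction immediately followed by a projection, on a tiny proof AST of my own devising; it is not the paper's NDL transformations.

```python
# Classic detour: introduce a conjunction ("both") and immediately project a
# conjunct ("left"/"right").  The bottom-up rewrite below removes it, yielding
# a strictly smaller deduction with the same conclusion.

def simplify(proof):
    """Bottom-up removal of and-intro/and-elim detours on a tuple-based proof AST."""
    if not isinstance(proof, tuple):
        return proof
    proof = tuple(simplify(p) for p in proof)
    # ("left",  ("both", p, q))  ==>  p
    # ("right", ("both", p, q))  ==>  q
    if len(proof) == 2 and isinstance(proof[1], tuple) and proof[1][0] == "both":
        if proof[0] == "left":
            return proof[1][1]
        if proof[0] == "right":
            return proof[1][2]
    return proof

if __name__ == "__main__":
    detour = ("left", ("both", ("claim", "A"), ("claim", "B")))
    print(simplify(detour))   # -> ('claim', 'A'), a smaller and cleaner deduction
```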