904 results for Extended Duhamel Principle
Abstract:
Ivan Dimovski, Yulian Tsankov - An extension of the Duhamel principle is proposed. To find explicit solutions of nonlocal boundary value problems of this type, an operational calculus based on a non-classical two-dimensional convolution is developed. The Bitsadze-Samarskii problem is an example of this type.
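For context, the classical Duhamel principle that the paper extends represents the solution of an inhomogeneous linear evolution problem as a convolution of the forcing with the homogeneous-solution kernel. A minimal numerical sketch for the scalar model problem u' + au = f, u(0) = 0, whose Duhamel solution is u(t) = ∫₀ᵗ e^{-a(t-s)} f(s) ds (the ODE, grid, and values are illustrative, not taken from the paper):

```python
import numpy as np

def duhamel_solution(a, f, t_grid):
    """Approximate u(t) = ∫_0^t exp(-a (t - s)) f(s) ds by the trapezoidal rule."""
    u = np.zeros_like(t_grid)
    for i, t in enumerate(t_grid):
        s = t_grid[: i + 1]
        g = np.exp(-a * (t - s)) * f(s)          # Duhamel integrand on [0, t]
        u[i] = np.sum((g[1:] + g[:-1]) * np.diff(s)) / 2.0
    return u

a = 2.0
t = np.linspace(0.0, 3.0, 601)
u = duhamel_solution(a, lambda s: np.ones_like(s), t)   # forcing f ≡ 1
exact = (1.0 - np.exp(-a * t)) / a                      # closed-form check
print(np.max(np.abs(u - exact)))                        # small discretization error
```

For constant forcing the convolution reproduces the closed-form solution (1 - e^{-at})/a up to quadrature error.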
Abstract:
Ivan Dimovski, Yulian Tsankov - In this paper an exact solution of the Bitsadze-Samarskii problem (1) for the Laplace equation is found, using an operational calculus based on a non-classical two-dimensional convolution. This exact solution can be viewed as a way of summing the nonharmonic sine series of the solution obtained by the Fourier method.
Abstract:
The electrochemical properties of the film-covered anode/solution interface in the magnesium/manganese dioxide dry cell have been evaluated. The most plausible electrical equivalent circuit description of the Mg/solution interface with the passive film intact has been identified. These results are based on the analysis of ac impedance and voltage transient measurements made on the dry cell under conditions which cause no damage to the protective passive film on the anode. The study demonstrates the complementary character of impedance and transient measurements when widely different frequency ranges are sampled in each type of investigation. The values and temperature dependence of the anode-film resistance, film capacitance, double-layer capacitance and charge-transfer resistance of the film-covered magnesium/solution interface have been determined. The magnitude of these values and their implications for understanding the important performance aspects of the magnesium/manganese dioxide dry cell are discussed. The study may be extended, in principle, to Li, Al and Ca batteries.
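The kind of equivalent-circuit analysis described can be illustrated with a common two-time-constant model: a solution resistance in series with a film element (R_f ∥ C_f) and a charge-transfer element (R_ct ∥ C_dl). The topology and all element values below are illustrative assumptions, not the fitted values from the study:

```python
import numpy as np

# Illustrative two-time-constant equivalent circuit for a film-covered
# electrode.  All element values are invented for demonstration.
R_s, R_f, C_f, R_ct, C_dl = 5.0, 200.0, 1e-6, 50.0, 20e-6  # ohm / farad

def parallel_rc(r, c, omega):
    """Complex impedance of a resistor in parallel with a capacitor."""
    return r / (1.0 + 1j * omega * r * c)

freq = np.logspace(-1, 5, 61)                   # 0.1 Hz .. 100 kHz
omega = 2.0 * np.pi * freq
Z = R_s + parallel_rc(R_f, C_f, omega) + parallel_rc(R_ct, C_dl, omega)

# Low-frequency real part approaches R_s + R_f + R_ct; at high
# frequency the capacitors short their resistors and only R_s remains.
print(Z.real[0], Z.real[-1])
```

Fitting such a model to measured spectra is how the film and charge-transfer parameters quoted in the abstract would typically be extracted.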
Abstract:
MSC 2010: 44A35, 35L20, 35J05, 35J25
Abstract:
In this paper, we mainly deal with eigenvalue problems of non-self-adjoint operators. To begin with, the generalized Rayleigh variational principle, the idea of which is due to Morse and Feshbach, is examined in detail and proved more rigorously. Then, three other equivalent formulations of it are presented. In applying them to approximate calculation we find the condition under which the above variational method can be identified with Galerkin's method. After that we illustrate the generalized variational principle by considering the hydrodynamic stability of plane Poiseuille flow and Bénard convection. Finally, the Rayleigh quotient method is extended to the case of non-self-adjoint matrices in order to determine their strong eigenvalues in linear algebra.
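The extension of the Rayleigh quotient to non-self-adjoint matrices mentioned at the end can be sketched with the two-sided quotient λ ≈ yᴴAx / yᴴx, built from approximate right and left eigenvectors. A didactic sketch (the matrix and the simple power-iteration scheme are illustrative, not the paper's method):

```python
import numpy as np

def two_sided_rayleigh(A, iters=200, seed=0):
    """Estimate the dominant eigenvalue of a (possibly non-self-adjoint)
    matrix with the generalized Rayleigh quotient  yᴴAx / yᴴx, where
    x and y approximate the right and left eigenvectors via power
    iteration on A and on Aᴴ.  A didactic sketch, not a robust solver."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    x = rng.standard_normal(n)
    y = rng.standard_normal(n)
    for _ in range(iters):
        x = A @ x
        x /= np.linalg.norm(x)
        y = A.conj().T @ y
        y /= np.linalg.norm(y)
    return (y.conj() @ A @ x) / (y.conj() @ x)

A = np.array([[4.0, 1.0],
              [0.0, 2.0]])          # non-symmetric; eigenvalues 4 and 2
print(two_sided_rayleigh(A))        # ≈ 4.0
```

For self-adjoint A the left and right vectors coincide and this reduces to the ordinary Rayleigh quotient.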
Abstract:
Transmitter-based equalization is investigated for enhanced performance in 10 Gb/s multimode-fiber links. Rigorous simulations and proof-of-principle experiments over 500 m of FDDI-grade fiber confirm for the first time the potential superiority of the technique relative to receiver-based schemes. © 2007 Optical Society of America.
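Transmitter-based equalization amounts to pre-distorting the launched symbols with a filter approximating the channel inverse, so the receiver sees a nearly flat response. A toy baseband sketch (the two-tap channel and its truncated inverse are invented for illustration; real multimode-fiber responses are far more complex):

```python
import numpy as np

# Toy pre-equalization: the transmitter filters the symbol stream with
# a truncated inverse of the dispersive channel before launch.
channel = np.array([1.0, 0.5])                        # two-tap channel (invented)
pre_taps = np.array([(-0.5) ** k for k in range(8)])  # truncated 1/(1 + 0.5 z⁻¹)

symbols = np.array([1.0, -1.0, 1.0, 1.0, -1.0, -1.0, 1.0, -1.0])
tx = np.convolve(symbols, pre_taps)                   # pre-equalized waveform
rx = np.convolve(tx, channel)[: len(symbols)]         # seen after the channel

print(np.max(np.abs(rx - symbols)))                   # residual ISI is negligible
```

The cascade of pre-filter and channel is close to a pure delay, which is exactly the effect the abstract's scheme achieves at the transmitter instead of the receiver.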
Abstract:
People with disabilities such as quadriplegia can use mouth-sticks and head-sticks as extension devices to perform desired manipulations. These extensions provide extended proprioception which allows users to directly feel forces and other perceptual cues such as texture present at the tip of the mouth-stick. Such devices are effective for two principal reasons: because of their close contact with the user's tactile and proprioceptive sensing abilities; and because they tend to be lightweight and very stiff, and can thus convey tactile and kinesthetic information with high bandwidth. Unfortunately, traditional mouth-sticks and head-sticks are limited in workspace and in the mechanical power that can be transferred because of user mobility and strength limitations. We describe an alternative implementation of the head-stick device using the idea of a virtual head-stick: a head-controlled bilateral force-reflecting telerobot. In this system the end-effector of the slave robot moves as if it were at the tip of an imaginary extension of the user's head. The design goal is for the system to have the same intuitive operation and extended proprioception as a regular mouth-stick effector but with augmentation of workspace volume and mechanical power. The input is through a specially modified six-DOF master robot (a PerForce™ hand-controller) whose joints can be back-driven to apply forces at the user's head. The manipulation tasks in the environment are performed by a six degree-of-freedom slave robot (the Zebra-ZERO™) with a built-in force sensor. We describe the prototype hardware/software implementation of the system, control system design, safety/disability issues, and initial evaluation tasks.
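The bilateral force-reflecting coupling can be pictured as a spring-damper link between the head-controlled master position and the slave end-effector, with the spring force reflected back to the user through the back-drivable master. A toy one-dimensional simulation (gains, mass, and time step are illustrative assumptions, not the system's parameters):

```python
# Toy position–position bilateral coupling for a master–slave pair:
# the slave is servoed toward the master (head) position and the
# coupling force is what the user feels through the master's joints.
K, B = 400.0, 20.0           # coupling stiffness (N/m) and damping (N·s/m)
m_s, dt = 0.5, 1e-3          # slave effective mass (kg), time step (s)

x_m = 0.02                   # master (head) held 20 mm from the origin
x_s, v_s = 0.0, 0.0          # slave position and velocity
for _ in range(5000):        # 5 s of semi-implicit Euler integration
    f = K * (x_m - x_s) - B * v_s    # coupling force driving the slave
    v_s += (f / m_s) * dt
    x_s += v_s * dt
f_reflected = K * (x_m - x_s)        # force reflected to the user's head
print(round(x_s, 4), round(f_reflected, 3))
```

Once the slave has converged to the master position the reflected force vanishes; contact with the environment would leave a nonzero reflected force, which is the "extended proprioception" cue.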
Abstract:
A precise formulation of the strong Equivalence Principle is essential to the understanding of the relationship between gravitation and quantum mechanics. The relevant aspects are reviewed in a context including General Relativity but allowing for the presence of torsion. For the sake of brevity, a concise statement is proposed for the Principle: An ideal observer immersed in a gravitational field can choose a reference frame in which gravitation goes unnoticed. This statement is given a clear mathematical meaning through an accurate discussion of its terms. It holds for ideal observers (time-like smooth non-intersecting curves), but not for real, spatially extended observers. Analogous results hold for gauge fields. The difference between gravitation and the other fundamental interactions comes from their distinct roles in the equation of force.
Abstract:
We show that an anomaly-free description of matter in (1+1) dimensions requires a deformation of the 2D relativity principle, which introduces a non-trivial centre in the 2D Poincaré algebra. Then we work out the reduced phase space of the anomaly-free 2D relativistic particle, in order to show that it lives in a noncommutative 2D Minkowski space. Moreover, we build a Gaussian wave packet to show that a Planck length is well defined in two dimensions. In order to provide a gravitational interpretation for this noncommutativity, we propose to extend the usual 2D generalized dilaton gravity models by a specific Maxwell component, which gauges the extra symmetry associated with the centre of the 2D Poincaré algebra. In addition, we show that this extension is a high energy correction to the unextended dilaton theories that can affect the topology of spacetime. Further, we couple a test particle to the general extended dilaton models with the purpose of showing that they predict a noncommutativity in curved spacetime, which is locally described by a Moyal star product in the low energy limit. We also conjecture a probable generalization of this result, which provides strong evidence that the noncommutativity is described by a certain star product which is not of the Moyal type at high energies. Finally, we prove that the extended dilaton theories can be formulated as Poisson-Sigma models based on a nonlinear deformation of the extended Poincaré algebra.
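For reference, the standard flat-space Moyal star product that describes such a noncommutativity in the low energy limit (the curved-space construction in the paper reduces to this form locally):

```latex
(f \star g)(x)
  = f(x)\,\exp\!\Big(\tfrac{i}{2}\,\theta^{\mu\nu}\,
    \overleftarrow{\partial}_{\mu}\overrightarrow{\partial}_{\nu}\Big)\, g(x)
  = fg + \tfrac{i}{2}\,\theta^{\mu\nu}\,(\partial_{\mu}f)(\partial_{\nu}g)
    + \mathcal{O}(\theta^{2}),
\qquad
[x^{\mu}\overset{\star}{,}\,x^{\nu}] = i\,\theta^{\mu\nu}.
```

In two dimensions θ^{μν} has a single independent component, which is why a single deformation parameter (a Planck length) suffices.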
Abstract:
In order to eliminate the de Gennes packing problem, which usually limits the attainable size of dendrimers, a new branching unit containing para-tetraphenylene ethynyl arms has been synthesized and utilized in the preparation of dendrimers of the Müllen type. The divergent principle of synthesis, based on the Dilthey reaction, could be carried up to the sixth generation, which contains 2776 benzene rings and possesses a diameter in the 27 nm range ("exploded dendrimer"). Monodispersity and dimensions of this and the lower generation species have been studied by MALDI-TOF MS (including the very recent superconducting tunnel junction detector), by size-exclusion chromatography, dynamic light scattering, transmission electron microscopy, and atomic force microscopy. Interesting features, apart from the huge dimension, are the low density and high porosity of these giant molecules, which cause extensive aggregation in the gas phase, flattening on solid support (AFM) and the ready incorporation of guest molecules in the condensed phase. Since the synthesis of the para-tetraphenylene arms is quite elaborate, similar dendrimers containing para-terphenylene arms have been prepared; they are accessible more economically ("semi-exploded dendrimers"). It has been shown that in several respects they mimic the features of the "exploded dendrimers". In order to take advantage of the presence of large internal cavities in this dendrimer type, dendrons containing -C≡C- triple bonds have also been incorporated. Surprisingly, they are readily hydrogenated under the conditions of heterogeneous catalysis (Pd/C), which demonstrates the large size of the cavities. As revealed by a quartz microbalance study, the post-hydrogenation dendrimers are less prone to incorporate guest molecules than before hydrogenation. Obviously, the more flexible nature of the former reduces porosity; it also leads to significant shrinkage.
An interesting perspective is the use of homogeneous hydrogenation catalysts of variable size with the aim of determining the dimension of internal free space.
Abstract:
This paper reinforces the argument of Harding and Sirmans (2002) that the observed preference of lenders for extended maturity rather than renegotiation of the principal in the case of loan default is due to the superior incentive properties of the former. Specifically, borrowers have a greater incentive to avoid default under extended maturity because it reduces the likelihood that they will be able to escape paying off the full loan balance. Thus, although extended maturity leaves open the possibility of foreclosure, it will be preferred to renegotiation as long as the deadweight loss from foreclosure is not too large.
Abstract:
The analysis and prediction of the dynamic behaviour of structural components plays an important role in modern engineering design. In this work, the so-called "mixed" finite element models based on Reissner's variational principle are applied to the solution of free and forced vibration problems, for beam and plate structures. The mixed beam models are obtained by using elements with various shape functions, ranging from simple linear to quadratic and cubic functions. The elements were in general capable of predicting the natural frequencies and dynamic responses with good accuracy. An isoparametric quadrilateral element with 8 nodes was developed for application to thin plate problems. The element has 32 degrees of freedom (one deflection, two bending and one twisting moment per node), which is suitable for discretization of plates with arbitrary geometry. A linear isoparametric element and two non-conforming displacement elements (4-node and 8-node quadrilateral) were extended to the solution of dynamic problems. An auto-mesh generation program was used to facilitate the preparation of input data required by the 8-node quadrilateral elements of mixed and displacement type. Numerical examples were solved using both the mixed beam and plate elements for predicting a structure's natural frequencies and dynamic response to a variety of forcing functions. The solutions were compared with the available analytical and displacement model solutions. The mixed elements developed have been found to have significant advantages over the conventional displacement elements in the solution of plate type problems. A dramatic saving in computational time is possible without any loss in solution accuracy. With beam type problems, there appears to be no significant advantages in using mixed models.
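Beam natural frequencies from such finite element models are typically validated against closed-form Euler-Bernoulli results. For a simply supported uniform beam these are ω_n = (nπ/L)² √(EI/ρA); a quick reference computation (the material and section values are illustrative, steel-like assumptions):

```python
import numpy as np

# Closed-form natural frequencies of a simply supported Euler-Bernoulli
# beam — the kind of analytical solution an FE model is checked against.
E   = 210e9        # Young's modulus, Pa (illustrative steel value)
rho = 7850.0       # density, kg/m³
L   = 2.0          # beam length, m
b = h = 0.05       # square cross-section, m
A = b * h          # area, m²
I = b * h**3 / 12  # second moment of area, m⁴

def natural_frequency_hz(n):
    """n-th bending natural frequency, ω_n = (nπ/L)² √(EI/ρA)."""
    omega = (n * np.pi / L) ** 2 * np.sqrt(E * I / (rho * A))
    return omega / (2.0 * np.pi)

for n in (1, 2, 3):
    print(n, round(natural_frequency_hz(n), 1))   # frequencies scale as n²
```

The n² scaling of the frequencies is a useful sanity check on any beam discretization, mixed or displacement-based.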
Abstract:
Numerous works have been conducted on modelling basic compliant elements such as wire beams, and closed-form analytical models of most basic compliant elements have been well developed. However, the modelling of complex compliant mechanisms is still challenging. This paper proposes a constraint-force-based (CFB) modelling approach to model compliant mechanisms, with a particular emphasis on modelling complex compliant mechanisms. The proposed CFB modelling approach can be regarded as an improved free-body-diagram (FBD) based modelling approach, and can be extended to the development of a screw-theory-based design approach. A compliant mechanism can be decomposed into rigid stages and compliant modules. A compliant module can offer elastic forces due to its deformation. Such elastic forces are regarded as variable constraint forces in the CFB modelling approach. Additionally, the CFB modelling approach defines external forces applied on a compliant mechanism as constant constraint forces. If a compliant mechanism is at static equilibrium, all the rigid stages are also at static equilibrium under the influence of the variable and constant constraint forces. Therefore, the constraint force equilibrium equations for all the rigid stages can be obtained, and the analytical model of the compliant mechanism can be derived based on the constraint force equilibrium equations. The CFB modelling approach can model a compliant mechanism linearly and nonlinearly, can obtain displacements of any points of the rigid stages, and allows external forces to be exerted on any positions of the rigid stages. Compared with the FBD based modelling approach, the CFB modelling approach does not need to identify the possible deformed configuration of a complex compliant mechanism to obtain the geometric compatibility conditions and the force equilibrium equations. Additionally, the mathematical expressions in the CFB approach have an easily understood physical meaning.
Using the CFB modelling approach, the variable constraint forces of three compliant modules, a wire beam, a four-beam compliant module and an eight-beam compliant module, have been derived in this paper. Based on these variable constraint forces, the linear and non-linear models of a decoupled XYZ compliant parallel mechanism are derived, and verified by FEA simulations and experimental tests.
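For the simplest case, one rigid stage supported by n identical wire beams acting in parallel, the constraint-force bookkeeping reduces to a single equilibrium equation. A minimal linear sketch (the fixed-guided lateral stiffness k = 12EI/L³ is the standard wire-beam result; the dimensions and loads are illustrative, not the paper's):

```python
# One rigid stage on n identical fixed-guided wire beams in parallel.
# Each beam contributes a variable constraint force f = -k·δ with
# k = 12EI/L³; equilibrium under a constant external force F gives
# F − n·k·δ = 0.  All numerical values are illustrative.
E = 69e9                 # aluminium-like Young's modulus, Pa
L = 0.05                 # wire-beam length, m
t = w = 0.001            # square wire-beam cross-section, m
I = w * t**3 / 12.0      # second moment of area, m⁴

k_beam = 12.0 * E * I / L**3        # one fixed-guided wire beam, N/m
n_beams = 4                         # four-beam compliant module
F_ext = 10.0                        # constant constraint (external) force, N

delta = F_ext / (n_beams * k_beam)  # stage displacement from equilibrium
print(k_beam, delta)
```

In the full CFB approach the same balance is written for every rigid stage, with the compliant-module forces as the variable constraint forces; nonlinearity enters through load-dependent beam stiffness terms.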
Abstract:
The thesis is an investigation of the principle of least effort (Zipf 1949 [1972]). The principle is simple (all effort should be least) and universal (it governs the totality of human behavior). Since the principle is also functional, the thesis adopts a functional theory of language as its theoretical framework, i.e. Natural Linguistics. The explanatory system of Natural Linguistics posits that higher principles govern preferences, which, in turn, manifest themselves as concrete, specific processes in a given language. Therefore, the thesis’ aim is to investigate the principle of least effort on the basis of external evidence from English. The investigation falls into the three following strands: the investigation of the principle itself, the investigation of its application in articulatory effort and the investigation of its application in phonological processes. The structure of the thesis reflects the division of its broad aims. The first part of the thesis presents its theoretical background (Chapter One and Chapter Two), the second part of the thesis deals with application of least effort in articulatory effort (Chapter Three and Chapter Four), whereas the third part discusses the principle of least effort in phonological processes (Chapter Five and Chapter Six). Chapter One serves as an introduction, examining various aspects of the principle of least effort such as its history, literature, operation and motivation. It overviews various names which denote least effort, explains the origins of the principle and reviews the literature devoted to the principle of least effort in a chronological order. The chapter also discusses the nature and operation of the principle, providing numerous examples of the principle at work. 
It emphasizes the universal character of the principle, drawing on both the linguistic field (low-level phonetic processes and language universals) and non-linguistic fields (physics, biology, psychology and cognitive sciences) to show that the principle governs human behavior and choices. Chapter Two provides the theoretical background of the thesis in terms of its theoretical framework and discusses the terms used in the thesis’ title, i.e. hierarchy and preference. It justifies the selection of Natural Linguistics as the thesis’ theoretical framework by outlining its major assumptions and demonstrating its explanatory power. As far as the concepts of hierarchy and preference are concerned, the chapter provides their definitions and reviews their various understandings via decision theories and linguistic preference-based theories. Since the thesis investigates the principle of least effort in language and speech, Chapter Three considers the articulatory aspect of effort. It reviews the notion of easy and difficult sounds and discusses the concept of articulatory effort, overviewing its literature as well as various understandings in a chronological fashion. The chapter also presents the concept of articulatory gestures within the framework of Articulatory Phonology. The thesis’ aim is to investigate the principle of least effort on the basis of external evidence; therefore Chapters Four and Six provide evidence in terms of three experiments, text message studies (Chapter Four) and phonological processes in English (Chapter Six). Chapter Four contains evidence for the principle of least effort in articulation on the basis of experiments. It describes the experiments in terms of their predictions and methodology. In particular, it discusses the adopted measure of effort established by means of the effort parameters as well as their status. The statistical methods of the experiments are also clarified.
The chapter reports on the results of the experiments, presenting them in a graphical way and discusses their relation to the tested predictions. Chapter Four establishes a hierarchy of speakers’ preferences with reference to articulatory effort (Figures 30, 31). The thesis investigates the principle of least effort in phonological processes, thus Chapter Five is devoted to the discussion of phonological processes in Natural Phonology. The chapter explains the general nature and motivation of processes as well as the development of processes in child language. It also discusses the organization of processes in terms of their typology as well as the order in which processes apply. The chapter characterizes the semantic properties of processes and overviews Luschützky’s (1997) contribution to NP with respect to processes in terms of their typology and incorporation of articulatory gestures in the concept of a process. Chapter Six investigates phonological processes. In particular, it identifies the issues of lenition/fortition definition and process typology by presenting the current approaches to process definitions and their typology. Since the chapter concludes that no coherent definition of lenition/fortition exists, it develops alternative lenition/fortition definitions. The chapter also revises the typology of phonological processes under effort management, which is an extended version of the principle of least effort. Chapter Seven concludes the thesis with a list of the concepts discussed in the thesis, enumerates the proposals made by the thesis in discussing the concepts and presents some questions for future research which have emerged in the course of investigation. The chapter also specifies the extent to which the investigation of the principle of least effort is a meaningful contribution to phonology.