953 results for Generalized Resolvent Operator


Relevance:

20.00%

Publisher:

Abstract:

For the first time, we introduce a generalized form of the exponentiated generalized gamma distribution [Cordeiro et al., The exponentiated generalized gamma distribution with application to lifetime data, J. Statist. Comput. Simul. 81 (2011), pp. 827-842] that serves as the baseline for the log-exponentiated generalized gamma regression model. The new distribution can accommodate increasing, decreasing, bathtub- and unimodal-shaped hazard functions. A second advantage is that it includes classical distributions reported in the lifetime literature as special cases. We obtain explicit expressions for the moments of the baseline distribution of the new regression model. The proposed model can be applied to censored data, since it includes several widely known regression models as sub-models, and it can therefore be used more effectively in the analysis of survival data. We obtain maximum likelihood estimates of the model parameters for censored data, and we demonstrate the usefulness of the extended regression model through two applications to real data.
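Since the abstract only states that maximum likelihood estimates are obtained under censoring, the sketch below shows what such a fit could look like. It is a minimal illustration, not the authors' code: it assumes, as is standard for exponentiated families, that the exponentiated generalized gamma CDF is a generalized gamma CDF raised to a power lambda, and it uses scipy's gengamma parametrization, which need not match Cordeiro et al.'s.

```python
import numpy as np
from scipy.stats import gengamma
from scipy.optimize import minimize

def negloglik(params, t, delta):
    """Right-censored negative log-likelihood for an exponentiated generalized
    gamma baseline, assuming F(t) = G(t)**lam with G a generalized gamma CDF.
    delta[i] = 1 for an observed failure, 0 for a right-censored time."""
    a, c, scale, lam = np.exp(params)          # enforce positivity via log-parametrization
    G = gengamma.cdf(t, a, c, scale=scale)
    g = gengamma.pdf(t, a, c, scale=scale)
    f = lam * G ** (lam - 1.0) * g             # density of the exponentiated family
    S = 1.0 - G ** lam                         # survival function
    eps = 1e-300
    return -np.sum(delta * np.log(f + eps) + (1 - delta) * np.log(S + eps))

# Toy data: exponential lifetimes with administrative censoring at t = 2.
rng = np.random.default_rng(1)
t_true = rng.exponential(1.0, size=200)
t = np.minimum(t_true, 2.0)
delta = (t_true <= 2.0).astype(float)

fit = minimize(negloglik, x0=np.zeros(4), args=(t, delta), method="Nelder-Mead")
print("MLE (a, c, scale, lambda):", np.exp(fit.x))
```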

Relevance:

20.00%

Publisher:

Abstract:

The main feature of partition of unity methods such as the generalized or extended finite element method is their ability to utilize a priori knowledge about the solution of a problem in the form of enrichment functions. However, the analytical derivation of enrichment functions with good approximation properties is mostly limited to two-dimensional linear problems. This paper presents a procedure to numerically generate proper enrichment functions for three-dimensional problems with confined plasticity where plastic evolution is gradual. The procedure involves the solution of boundary value problems around local regions exhibiting nonlinear behavior and the enrichment of the global solution space with the local solutions through the partition of unity framework. This approach can produce accurate nonlinear solutions at a reduced computational cost compared with standard finite element methods, since the computationally intensive nonlinear iterations can be performed on coarse global meshes once enrichment functions properly describing the localized nonlinear behavior have been created. Several three-dimensional nonlinear problems based on rate-independent J2 plasticity theory with isotropic hardening are solved using the proposed procedure to demonstrate its robustness, accuracy and computational efficiency.
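For context, the partition-of-unity enrichment that the procedure builds on can be written in the standard GFEM form below (not taken from the paper itself); here the enrichment functions E_{ij} would be the numerically computed local solutions:

```latex
u^{h}(\boldsymbol{x}) \;=\; \sum_{i} N_i(\boldsymbol{x})\,\Bigl[\, u_i \;+\; \sum_{j} E_{ij}(\boldsymbol{x})\, a_{ij} \Bigr],
\qquad \sum_{i} N_i(\boldsymbol{x}) \equiv 1,
```

where the N_i are standard finite element shape functions forming a partition of unity, and u_i, a_{ij} are the standard and enriched degrees of freedom, respectively.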

Relevance:

20.00%

Publisher:

Abstract:

The extension of Boltzmann-Gibbs thermostatistics proposed by Tsallis introduces a parameter q in addition to the inverse temperature β. Here, we show that a previously introduced generalized Metropolis dynamics used to evolve spin models is not local and does not obey detailed energy balance. In this dynamics, locality is recovered only for q = 1, which corresponds to the standard Metropolis algorithm. Nonlocality implies very time-consuming computer calculations, since the energy of the whole system must be reevaluated whenever a single spin is flipped. To circumvent this costly calculation, we propose a generalized master equation, which gives rise to a local generalized Metropolis dynamics that obeys detailed energy balance. To compare the critical values obtained with other generalized dynamics, we perform equilibrium Monte Carlo simulations of the Ising model. Using short-time nonequilibrium numerical simulations, we also calculate for this model the critical temperature and the static and dynamical critical exponents as functions of q. Even for q ≠ 1, we show that suitable time-evolving power laws can be found for each initial condition. Our numerical experiments corroborate the literature results when we use the nonlocal dynamics, showing that short-time parameter determination also works in this case. However, the dynamics governed by the new master equation leads to different results for the critical temperatures and also for the critical exponents, thus affecting the universality classes. We further propose a simple algorithm to optimize the fitting of the time evolution with a power law, considering two successive refinements in a log-log plot.
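The nonlocality described above is easy to see in code. The sketch below implements a nonlocal generalized Metropolis step for a small 2D Ising model, assuming the Tsallis q-exponential weight exp_q(-βE) = [1 - (1-q)βE]^{1/(1-q)} (truncated at zero): the acceptance ratio compares the total energies before and after a single-spin flip, so the whole-system energy must be recomputed at every step. The local dynamics derived from the paper's master equation is not reproduced here.

```python
import numpy as np

def q_exp(x, q):
    """Tsallis q-exponential exp_q(x) = [1 + (1-q)x]^(1/(1-q)), truncated at 0."""
    if abs(q - 1.0) < 1e-12:
        return np.exp(x)
    base = 1.0 + (1.0 - q) * x
    return base ** (1.0 / (1.0 - q)) if base > 0.0 else 0.0

def total_energy(s, J=1.0):
    """Nearest-neighbour Ising energy with periodic boundary conditions."""
    return -J * np.sum(s * (np.roll(s, 1, axis=0) + np.roll(s, 1, axis=1)))

def nonlocal_metropolis_sweep(s, beta, q, rng):
    """One sweep of the nonlocal generalized Metropolis dynamics: the acceptance
    ratio compares q-exponential weights of the *total* energies, so the
    whole-system energy is reevaluated after every proposed single-spin flip."""
    L = s.shape[0]
    E_old = total_energy(s)
    for _ in range(L * L):
        i, j = rng.integers(L), rng.integers(L)
        s[i, j] *= -1                       # propose a single-spin flip
        E_new = total_energy(s)             # nonlocal: full energy recomputed
        w_old, w_new = q_exp(-beta * E_old, q), q_exp(-beta * E_new, q)
        accept = 1.0 if w_old == 0.0 else min(1.0, w_new / w_old)
        if rng.random() < accept:
            E_old = E_new                   # accept the flip
        else:
            s[i, j] *= -1                   # reject and undo
    return s

rng = np.random.default_rng(0)
spins = rng.choice([-1, 1], size=(16, 16))
for sweep in range(50):
    spins = nonlocal_metropolis_sweep(spins, beta=0.4, q=0.9, rng=rng)
print("magnetization per spin:", spins.mean())
```

For q → 1 the weight ratio reduces to exp(-βΔE) and the usual local single-flip update is recovered.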

Relevance:

20.00%

Publisher:

Abstract:

Non-commutative geometry indicates a deformation of the energy-momentum dispersion relation for massless particles, f(E) ≡ E/(pc) ≠ 1. This distorted energy-momentum relation can affect the radiation-dominated phase of the universe at sufficiently high temperature. This prompted the idea of non-commutative inflation by Alexander et al (2003 Phys. Rev. D 67 081301) and Koh and Brandenberger (2007 JCAP06(2007) 021 and JCAP11(2007) 013). These authors studied a one-parameter family of non-relativistic dispersion relations that leads to inflation: the family of curves f(E) = 1 + (λE)^α. We show here how the conceptually different structure of symmetries of non-commutative spaces can lead, in a mathematically consistent way, to the fundamental equations of non-commutative inflation driven by radiation. We describe how this structure can be considered independently of (but including) the idea of non-commutative spaces as a starting point of the general inflationary deformation of SL(2, C). We analyze the conditions on the dispersion relation that lead to inflation as a set of inequalities which play the same role as the slow-roll conditions on the potential of a scalar field. We study conditions for a possible numerical approach to obtain a general one-parameter family of dispersion relations that lead to successful inflation.
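Spelled out from the abstract's own definitions, the deformation and the one-parameter family read

```latex
f(E) \;\equiv\; \frac{E}{p\,c} \;=\; 1 + (\lambda E)^{\alpha},
\qquad\text{i.e.}\qquad
E \;=\; p\,c\,\bigl[\,1 + (\lambda E)^{\alpha}\,\bigr],
```

with the standard massless relation E = pc recovered in the limit λE → 0.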

Relevance:

20.00%

Publisher:

Abstract:

A rigorous asymptotic theory for Wald residuals in generalized linear models is not yet available. The authors provide matrix formulae of order O(n^{-1}), where n is the sample size, for the first two moments of these residuals. The formulae can be applied to many regression models widely used in practice. The authors propose adjusted Wald residuals for these models, with approximately zero mean and unit variance. The expressions are used to analyze a real dataset. Simulation results indicate that the adjusted Wald residuals are better approximated by the standard normal distribution than the unadjusted Wald residuals.
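Concretely, the adjustment presumably amounts to standardizing each Wald residual r_i by its approximate moments (a reading of the abstract, not a formula quoted from the paper):

```latex
r_i^{*} \;=\; \frac{r_i - \widehat{\mathrm{E}}(r_i)}{\sqrt{\widehat{\operatorname{Var}}(r_i)}},
```

with the estimated mean and variance taken from the O(n^{-1}) matrix formulae, so that r_i^{*} has approximately zero mean and unit variance.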

Relevance:

20.00%

Publisher:

Abstract:

A twisted generalized Weyl algebra A of degree n depends on a base algebra R, n commuting automorphisms σ_i of R, n central elements t_i of R and on some additional scalar parameters. In a paper by Mazorchuk and Turowska, it is claimed that certain consistency conditions for the σ_i and t_i are sufficient for the algebra to be nontrivial. However, in this paper we give an example which shows that this is false. We also correct the statement by finding a new set of consistency conditions and prove that the old and new conditions together are necessary and sufficient for the base algebra R to map injectively into A. In particular, they are sufficient for the algebra A to be nontrivial. We speculate that these consistency relations may play a role in other areas of mathematics, analogous to the role played by the Yang-Baxter equation in the theory of integrable systems.

Relevance:

20.00%

Publisher:

Abstract:

The generalized finite element method (GFEM) is applied to a nonconventional hybrid-mixed stress formulation (HMSF) for plane analysis. In the HMSF, three approximation fields are involved: stresses and displacements in the domain, and displacement fields on the static boundary. The GFEM-HMSF shape functions are then generated as the product of a partition of unity associated with each field and polynomial enrichment functions. In principle, the enrichment can be carried out independently over each of the HMSF approximation fields. However, the stability and convergence features of the resulting numerical method can be affected, mainly by spurious modes generated when enrichment is arbitrarily applied to the displacement fields. With the aim of efficiently exploring the enrichment possibilities, an extension to GFEM-HMSF of the conventional Zienkiewicz patch test is proposed as a necessary condition to ensure numerical stability. Finally, once the extended patch test is satisfied, some numerical analyses focusing on selective enrichment over distorted meshes formed by bilinear quadrilateral finite elements are presented, showing the performance of the GFEM-HMSF combination.

Relevance:

20.00%

Publisher:

Abstract:

Background: Thalamotomies and pallidotomies were commonly performed before the deep brain stimulation (DBS) era. Although ablative procedures can lead to significant dystonia improvement, longer periods of analysis reveal disease progression and functional deterioration. Today, the same patients seek additional treatment possibilities. Methods: Four patients with generalized dystonia who had previously undergone bilateral pallidotomy came to our service seeking additional treatment because of dystonic symptom progression. Bilateral subthalamic nucleus DBS (B-STN-DBS) was the treatment of choice. The patients were evaluated with the Burke-Fahn-Marsden Dystonia Rating Scale (BFMDRS) and the Unified Dystonia Rating Scale (UDRS) before and 2 years after surgery. Results: All patients showed significant functional improvement, averaging 65.3% in BFMDRS (P = .014) and 69.2% in UDRS (P = .025). Conclusions: These results suggest that B-STN-DBS may be an interesting treatment option for generalized dystonia, even for patients who have already undergone bilateral pallidotomy. (c) 2012 Movement Disorder Society

Relevance:

20.00%

Publisher:

Abstract:

Background: The generalized odds ratio (GOR) was recently suggested as a genetic model-free measure for association studies. However, its properties have not been extensively investigated. We used Monte Carlo simulations to investigate type-I error rates, power and bias in both effect-size and between-study variance estimates of meta-analyses using the GOR as a summary effect, and compared these results with those obtained by the usual approaches of model specification. We further applied the GOR in a real meta-analysis of three genome-wide association studies in Alzheimer's disease. Findings: For bi-allelic polymorphisms, the GOR performs virtually identically to a standard multiplicative model of analysis (e.g. the per-allele odds ratio) for variants acting multiplicatively, but slightly augments the power to detect variants with a dominant mode of action, while reducing the probability of detecting recessive variants. Although there were differences between the GOR and the usual approaches in terms of bias and type-I error rates, both simulation- and real-data-based results provided little indication that these differences will be substantial in practice for meta-analyses involving bi-allelic polymorphisms. However, the use of the GOR may be slightly more powerful for the synthesis of data from tri-allelic variants, particularly when susceptibility alleles are less common in the populations (≤10%). This gain in power may depend on knowledge of the direction of the effects. Conclusions: For the synthesis of data from bi-allelic variants, the GOR may be regarded as a multiplicative-like model of analysis. The use of the GOR may be slightly more powerful in the tri-allelic case, particularly when susceptibility alleles are less common in the populations.
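A minimal sketch of how the GOR can be computed from a single study's genotype counts is given below; it uses the common definition of the generalized odds ratio for ordinal exposures (probability that a random case carries more risk alleles than a random control, divided by the reverse probability), which may differ in detail from the estimator used in the paper.

```python
import numpy as np

def generalized_odds_ratio(cases, controls):
    """Generalized odds ratio for ordered genotype categories
    (e.g. 0, 1, 2 copies of the putative risk allele):
    GOR = P(case exposure > control exposure) / P(case exposure < control exposure),
    estimated from the cross-tabulated counts of one study."""
    cases = np.asarray(cases, dtype=float)
    controls = np.asarray(controls, dtype=float)
    num = sum(cases[i] * controls[j]
              for i in range(len(cases)) for j in range(len(controls)) if i > j)
    den = sum(cases[i] * controls[j]
              for i in range(len(cases)) for j in range(len(controls)) if i < j)
    return num / den

# Hypothetical bi-allelic example: genotype counts ordered as (0, 1, 2) risk alleles.
print(generalized_odds_ratio(cases=[60, 100, 40], controls=[90, 90, 20]))
```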

Relevance:

20.00%

Publisher:

Abstract:

The thesis presents the Wiener regularity criterion in the classical setting of the Laplace operator, followed by some notions of potential theory and the proof of the criterion for the heat operator; in this second part, particular attention is devoted to mean-value formulas and to a strong Harnack inequality, which are fundamental for the treatment of the central topic.
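For reference, the classical criterion for the Laplace operator that the thesis starts from can be stated as follows (a standard formulation in R^n with n ≥ 3, not quoted from the thesis): a boundary point x_0 of an open set Ω is regular if and only if

```latex
\sum_{k=1}^{\infty} 2^{\,k(n-2)}\;
\operatorname{cap}\bigl(\overline{B}(x_0, 2^{-k}) \setminus \Omega\bigr) \;=\; \infty ,
```

where cap denotes the Newtonian capacity; the caloric version treated in the thesis replaces Euclidean balls and Newtonian capacity by their heat-equation counterparts.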

Relevance:

20.00%

Publisher:

Abstract:

In recent years, industrial processes have shown an ever-increasing degree of automation. This trend is driven by the demand for systems with high performance in terms of the quality of the products and services generated, productivity, efficiency, and low cost of design, realization and maintenance. The growth of complex automation systems is spreading rapidly to automated manufacturing systems (AMS), where the integration of mechanical and electronic technology typical of mechatronics is merging with other technologies such as informatics and communication networks. An AMS is a very complex system that can be thought of as a set of flexible working stations and one or more transportation systems. To appreciate how important these machines are in our society, consider that every day most of us use bottled water or soda and buy boxed products such as food or cigarettes. A further indication of their complexity is that the consortium of machine producers has estimated that there are around 350 types of manufacturing machine. A large part of the manufacturing machine industry, and notably the packaging machine industry, is located in Italy; a particularly high concentration is found in the Bologna area, which for this reason is called the "packaging valley". Usually, the various parts of an AMS interact in a concurrent and asynchronous way, and coordinating the parts of the machine to obtain a desired overall behaviour is a hard task. This is often the case in large-scale systems organized in a modular and distributed manner. Even if the success of a modern AMS, from a functional and behavioural point of view, is still to be attributed to the design choices made in the definition of the mechanical structure and the electrical/electronic architecture, the system that governs the control of the plant is becoming crucial because of the large number of duties associated with it. Apart from the activities inherent to the automation of the machine cycles, the supervisory system is called on to perform other main functions, such as: emulating the behaviour of traditional mechanical members, thus allowing a drastic constructive simplification of the machine and a crucial functional flexibility; dynamically adapting the control strategies according to different production needs and operational scenarios; obtaining a high quality of the final product through verification of the correctness of the processing; guiding the machine operator to promptly and carefully take the actions needed to establish or restore optimal operating conditions; and managing diagnostic information in real time as a support for machine maintenance operations. The facilities that designers can find directly on the market, in terms of software component libraries, in fact provide adequate support for the implementation of either top-level or bottom-level functionalities, typically pertaining to the domains of user-friendly HMIs, closed-loop regulation and motion control, and fieldbus-based interconnection of remote smart devices.
What is still lacking is a reference framework comprising a comprehensive set of highly reusable logic control components that, by focusing on the cross-cutting functionalities characterizing the automation domain, may help designers model and structure their applications according to specific needs. Historically, the design and verification of complex automated industrial systems has been performed empirically, without a clear distinction between functional and technological-implementation concepts and without a systematic method for dealing organically with the complete system. In the field of analog and digital control, design and verification through formal and simulation tools have long been adopted, at least for multivariable and/or nonlinear controllers for complex time-driven dynamics, as in the fields of vehicles, aircraft, robots, electric drives and complex power electronics equipment. Moving to logic control, which is typical of industrial manufacturing automation, the design and verification process is approached in a completely different and usually very "unstructured" way: no clear distinction is made between functions and implementations, or between functional architectures and technological architectures and platforms. This difference is probably due to the different "dynamical framework" of logic control with respect to analog/digital control. As a matter of fact, in logic control discrete-event dynamics replace time-driven dynamics, so most of the formal and mathematical tools of analog/digital control cannot be directly migrated to logic control to highlight the distinction between functions and implementations. In addition, in the common view of application technicians, logic control design is strictly tied to the adopted implementation technology (relays in the past, software nowadays), again leading to deep confusion between the functional and technological views. In industrial automation software engineering, concepts such as modularity, encapsulation, composability and reusability are strongly emphasized and profitably realized in the so-called object-oriented methodologies. Industrial automation has only lately been adopting this approach, as testified by the IEC standards IEC 61131-3 and IEC 61499, which have been considered in commercial products only recently. On the other hand, many contributions have already been proposed in the scientific and technical literature to establish a suitable modelling framework for industrial automation. In recent years there has been considerable growth in the exploitation of innovative concepts and technologies from the ICT world in industrial automation systems. As far as logic control design is concerned, Model-Based Design (MBD) is being imported into industrial automation from the software engineering field. Another key point in industrial automated systems is the growth of requirements in terms of availability, reliability and safety. In other words, the control system should not only deal with the nominal behaviour but also with other important duties, such as diagnosis and fault isolation, recovery and safety management. Indeed, together with high performance, fault occurrences increase in complex systems.
This is a consequence of the fact that, as typically occurs in complex mechatronic systems such as AMS, alongside reliable mechanical elements an increasing number of electronic devices are present, which are by their nature more vulnerable. The diagnosis and fault-isolation problem for a generic dynamical system consists in designing a processing unit that, by appropriately processing the inputs and outputs of the dynamical system, is capable of detecting incipient faults in the plant devices and of reconfiguring the control system so as to guarantee satisfactory performance. The designer should be able to formally verify the product, certifying that in its final implementation it will perform its required function with the desired level of reliability and safety; the next step is to prevent faults and, eventually, to reconfigure the control system so that faults are tolerated. On this topic, important improvements to the formal verification of logic control, fault diagnosis and fault-tolerant control derive from Discrete Event Systems theory. The aim of this work is to define a design pattern and a control architecture that help the designer of control logic in industrial automated systems. The work starts with a brief discussion of the main characteristics and a description of industrial automated systems in Chapter 1. Chapter 2 surveys the state of the software engineering paradigms applied to industrial automation. Chapter 3 presents an architecture for industrial automated systems based on the new concept of the Generalized Actuator, showing its benefits, while in Chapter 4 this architecture is refined using a novel entity, the Generalized Device, in order to obtain better reusability and modularity of the control logic. Chapter 5 presents a new approach, based on Discrete Event Systems, to the problem of software formal verification, together with an active fault-tolerant control architecture using online diagnostics. Finally, concluding remarks and some ideas on new directions to explore are given. Appendix A briefly reports some concepts and results about Discrete Event Systems which should help the reader understand some crucial points in Chapter 5, while Appendix B gives an overview of the experimental testbed of the Laboratory of Automation of the University of Bologna, used to validate the approach presented in Chapters 3, 4 and 5. Appendix C reports some component models used in Chapter 5 for formal verification.
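As a concrete illustration of the Discrete Event Systems viewpoint used for diagnosis, the sketch below models a hypothetical actuator as a finite automaton with an unobservable fault event and runs the standard observer (diagnoser-like) update over the observable events; all state and event names are invented for illustration and are not taken from the thesis.

```python
# Hypothetical DES model of an actuator: 'fail' is an unobservable fault event.
transitions = {
    ("idle",    "start"): "working",
    ("working", "done"):  "idle",
    ("working", "fail"):  "faulty",    # unobservable fault
    ("faulty",  "done"):  "faulty",    # after the fault, 'done' is still emitted
}
unobservable = {"fail"}

def uo_closure(states):
    """States reachable from `states` via unobservable events only."""
    closure, frontier = set(states), list(states)
    while frontier:
        s = frontier.pop()
        for (src, ev), dst in transitions.items():
            if src == s and ev in unobservable and dst not in closure:
                closure.add(dst)
                frontier.append(dst)
    return frozenset(closure)

def observer_step(belief, event):
    """Update the set of possible states after observing `event`."""
    moved = {transitions[(s, event)] for s in belief if (s, event) in transitions}
    return uo_closure(moved)

# Track the belief state along an observed trace; if every state in the
# belief is 'faulty', the fault is diagnosed with certainty.
belief = uo_closure({"idle"})
for ev in ["start", "done", "start", "done"]:
    belief = observer_step(belief, ev)
    print(ev, "->", set(belief), "| fault certain:", belief == {"faulty"})
```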

Relevance:

20.00%

Publisher:

Abstract:

A group G has finite Prüfer rank (respectively, finite co-central rank) at most r if every finitely generated subgroup H satisfies: H (respectively, H modulo its centre) can be generated by r elements. In the present work, the known theorems on groups of finite Prüfer rank (X-groups for short) are generalized, as far as possible, to the considerably larger class of groups of finite co-central rank (R-groups for short). For locally nilpotent R-groups that are torsion-free or p-groups, it is shown that the quotient by the centre must be an X-group. It follows that hypercentrality and local nilpotency are equivalent conditions for R-groups. Analogously, R-groups are locally soluble if and only if they are hyperabelian. Central to the structure theory of hyperabelian R-groups is the fact that such groups possess an ascending normal series of abelian X-groups. A Sylow theory for periodic hyperabelian R-groups is developed. Torsion-free hyperabelian R-groups are shown to be soluble. Furthermore, locally finite R-groups are virtually hyperabelian. For R-groups, very large classes of groups coincide with the virtually hyperabelian groups; to this end, the notion of a section cover is introduced, and it is shown that R-groups with a virtually hyperabelian section cover are themselves virtually hyperabelian.

Relevance:

20.00%

Publisher:

Abstract:

The first part of the thesis concerns the study of inflation in the context of a theory of gravity called "Induced Gravity", in which the gravitational coupling varies in time according to the dynamics of the very same scalar field (the "inflaton") that drives inflation, while settling onto the value measured today after the end of inflation. Through the analytical and numerical analysis of scalar and tensor cosmological perturbations, we show that the model leads to consistent predictions for a broad variety of symmetry-breaking inflaton potentials, once a dimensionless parameter entering the action is properly constrained. We also discuss the average expansion of the Universe after inflation (when the inflaton undergoes coherent oscillations about the minimum of its potential) and determine the effective equation of state. Finally, we analyze the resonant and perturbative decay of the inflaton during (p)reheating. The second part is devoted to the study of a proposal for a quantum theory of gravity dubbed "Horava-Lifshitz (HL) Gravity", which relies on power-counting renormalizability while explicitly breaking Lorentz invariance. We test a pair of variants of the theory ("projectable" and "non-projectable") on a cosmological background and with the inclusion of scalar-field matter. By inspecting the quadratic action for the linear scalar cosmological perturbations, we determine the actual number of propagating degrees of freedom and find that the theory, being endowed with fewer symmetries than General Relativity, admits an extra gravitational degree of freedom which is potentially unstable. More specifically, we conclude that in the case of projectable HL Gravity the extra mode is either a ghost or a tachyon, whereas in the case of non-projectable HL Gravity the extra mode can be made well behaved for suitable choices of a pair of free dimensionless parameters and, moreover, turns out to decouple from the low-energy physics.
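For orientation, induced-gravity inflation is usually based on an action of the following schematic form (a standard form from the literature; the conventions and the precise symmetry-breaking potential need not coincide with those of the thesis), in which the non-minimal coupling γσ²R ties the effective gravitational coupling to the inflaton σ:

```latex
S \;=\; \int d^{4}x\,\sqrt{-g}\,
\Bigl[\,\frac{\gamma\,\sigma^{2}}{2}\,R
\;-\;\frac{1}{2}\,g^{\mu\nu}\partial_{\mu}\sigma\,\partial_{\nu}\sigma
\;-\;V(\sigma)\Bigr],
```

so that the value reached by σ at the minimum of V fixes the gravitational constant measured today.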

Relevance:

20.00%

Publisher:

Abstract:

The present thesis is a contribution to the multi-variable theory of Bergman and Hardy Toeplitz operators on spaces of holomorphic functions over finite- and infinite-dimensional domains. In particular, we focus on certain spectrally invariant Frechet operator algebras F closely related to the local symbol behavior of Toeplitz operators in F. We summarize results due to B. Gramsch et al. on the construction of Psi_0- and Psi^*-algebras in operator algebras and corresponding scales of generalized Sobolev spaces using commutator methods, generalized Laplacians and strongly continuous group actions. In the case of the Segal-Bargmann space H^2(C^n,m) of Gaussian square-integrable entire functions on C^n, we determine a class of vector fields Y(C^n) supported in complex cones K. Further, we require that for any finite subset V of Y(C^n) the Toeplitz projection P is a smooth element in the Psi_0-algebra constructed by commutator methods with respect to V. As a result we obtain Psi_0- and Psi^*-operator algebras F localized in cones K. It is an immediate consequence that F contains all Toeplitz operators T_f with a symbol f of certain regularity in an open neighborhood of K. There is a natural unitary group action on H^2(C^n,m) which is induced by weighted shifts and unitary groups on C^n. We examine the corresponding Psi^*-algebra A of smooth elements in Toeplitz C^*-algebras. Among other results, sufficient conditions on the symbol f for T_f to belong to A are given in terms of estimates on its Berezin transform. Local aspects of the Szegö projection P_s on the Heisenberg group and the corresponding Toeplitz operators T_f with symbol f are studied. In this connection we apply a result due to Nagel and Stein which states that for any strictly pseudo-convex domain U the projection P_s is a pseudodifferential operator of exotic type (1/2, 1/2). The second part of this thesis is devoted to the infinite-dimensional theory of Bergman and Hardy spaces and the corresponding Toeplitz operators. We give a new proof of a result observed by Boland and Waelbroeck, namely that the space of all holomorphic functions H(U) on an open subset U of a DFN-space (dual Frechet nuclear space) is an FN-space (Frechet nuclear space) when equipped with the compact-open topology. Using the nuclearity of H(U) we obtain Cauchy-Weil-type integral formulas for closed subalgebras A in H_b(U), the space of all bounded holomorphic functions on U, where A separates points. Further, we prove the existence of Hardy spaces of holomorphic functions on U corresponding to the abstract Shilov boundary S_A of A and with respect to a suitable boundary measure on S_A. Finally, for a domain U in a DFN-space or a Polish space we consider the symmetrizations m_s of measures m on U by suitable representations of a group G in the group of homeomorphisms on U. In particular, in the case where m leads to Bergman spaces of holomorphic functions on U, the group G is compact and the representation is continuous, we show that m_s defines a Bergman space of holomorphic functions on U as well. This leads to unitary group representations of G on L^p- and Bergman spaces, inducing operator algebras of smooth elements related to the symmetries of U.
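Throughout, the Toeplitz operator with symbol f is the compression of the corresponding multiplication operator by the relevant projection; on the Segal-Bargmann space, with P the Toeplitz (orthogonal) projection from L²(C^n, m) onto H²(C^n, m), this reads

```latex
T_f\,g \;=\; P(f\,g), \qquad g \in H^{2}(\mathbb{C}^{n}, m),
```

and analogously with the Bergman or Szegö projection in the Bergman and Hardy space settings considered in the second part.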

Relevance:

20.00%

Publisher:

Abstract:

Since the development of quantum mechanics it has been natural to analyze the connection between classical and quantum mechanical descriptions of physical systems. In particular, one expects that, in some sense, when quantum mechanical effects become negligible the system will behave as dictated by classical mechanics. One famous relation between classical and quantum theory is due to Ehrenfest. This result was later developed and put on firm mathematical foundations by Hepp. He proved that matrix elements of bounded functions of quantum observables between suitable coherent states (which depend on Planck's constant h) converge to classical values evolving according to the expected classical equations as h goes to zero. His results were later generalized by Ginibre and Velo to bosonic systems with infinitely many degrees of freedom and to scattering theory. In this thesis we study the classical limit of the Nelson model, which describes non-relativistic particles, whose evolution is dictated by the Schrödinger equation, interacting through a Yukawa-type potential with a scalar relativistic field, whose evolution is dictated by the Klein-Gordon equation. The classical limit is a mean-field and weak-coupling limit. We prove that the transition amplitude of a creation or annihilation operator, between suitable coherent states, converges in the classical limit to the solution of the system of differential equations that describes the classical evolution of the theory. The quantum evolution operator converges to the evolution operator of the fluctuations around the classical solution. Transition amplitudes of normal-ordered products of creation and annihilation operators between coherent states converge to suitable products of the classical solutions. Transition amplitudes of normal-ordered products of creation and annihilation operators between fixed-particle states converge to an average of products of classical solutions, corresponding to different initial conditions.
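Schematically, the limiting classical system referred to above is a coupled Schrödinger-Klein-Gordon system of Yukawa type; suppressing coupling constants and the form factor, and up to sign conventions that may differ from those of the thesis, it has the form

```latex
\begin{aligned}
i\,\partial_t u_t &= -\tfrac{1}{2}\Delta\, u_t + A_t\, u_t, \\
\bigl(\partial_t^{2} - \Delta + m^{2}\bigr) A_t &= -\,|u_t|^{2},
\end{aligned}
```

where u_t is the particle wave function and A_t the classical scalar field; the fluctuations around a solution of this system are what the quantum evolution operator converges to.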