938 results for Finite-dimensional discrete phase spaces
Abstract:
This paper presents a model predictive current control strategy applied to a proposed single-phase five-level active rectifier (FLAR). The current control strategy exploits the discrete-time nature of the active rectifier to define its state in each sampling interval. Although the switching frequency is not constant, the strategy tracks the current reference with low total harmonic distortion (THDF). The implementation of the active rectifier used to obtain the experimental results is described in detail throughout the paper, covering the circuit topology, the principle of operation, the power theory, and the current control strategy. The experimental results confirm the robustness and good performance (low current THDF and a controlled output voltage) of the proposed single-phase FLAR operating with model predictive current control.
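A finite-control-set predictive current controller of the kind described above can be sketched as follows. All numbers are illustrative assumptions (inductance, resistance, sampling period, DC-link voltage and the five voltage levels), not values from the paper's FLAR prototype.

```python
# Minimal sketch of finite-control-set model predictive current control.
# L, R, Ts and the five converter voltage levels below are hypothetical.

def predict_current(i_k, v_source, v_conv, L=5e-3, R=0.1, Ts=50e-6):
    """Forward-Euler prediction of the inductor current one step ahead."""
    return i_k + (Ts / L) * (v_source - v_conv - R * i_k)

def best_level(i_k, i_ref, v_source, levels):
    """Pick the converter voltage level whose predicted current is closest
    to the reference; the cost function is the absolute current error."""
    return min(levels, key=lambda v: abs(i_ref - predict_current(i_k, v_source, v)))

# Assumed five-level converter: 0, +/-Vdc/2, +/-Vdc with Vdc = 400 V.
levels = [-400.0, -200.0, 0.0, 200.0, 400.0]
v = best_level(i_k=1.0, i_ref=5.0, v_source=300.0, levels=levels)
```

At each sampling instant the controller simply enumerates the finite set of converter states, predicts the current one step ahead for each, and applies the state with the smallest predicted error; this is why the switching frequency is not constant.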
Abstract:
Doctoral thesis in Civil Engineering.
Abstract:
We explore which types of finiteness properties are possible for intersections of geometrically finite groups of isometries in negatively curved symmetric rank one spaces. Our main tool is a twist construction which takes as input a geometrically finite group containing a normal subgroup of infinite index with given finiteness properties and infinite Abelian quotient, and produces a pair of geometrically finite groups whose intersection is isomorphic to the normal subgroup.
Abstract:
In this paper we prove T1-type necessary and sufficient conditions for the boundedness, on inhomogeneous Lipschitz spaces, of fractional integrals and singular integrals defined on a metric measure space whose measure satisfies an n-dimensional growth condition. We also show that hypersingular integrals are bounded on these spaces.
Abstract:
We propose a mixed finite element method for a class of nonlinear diffusion equations, based on their interpretation as gradient flows in optimal transportation metrics. We introduce an appropriate linearization of the optimal transport problem, which leads to a mixed symmetric formulation. This formulation preserves the maximum principle for both the semi-discrete scheme and the fully discrete scheme for a certain class of problems. In addition, solutions of the mixed formulation maintain exponential convergence in relative entropy towards the steady state in the case of a nonlinear Fokker-Planck equation with a uniformly convex potential. We demonstrate the behavior of the proposed scheme with 2D simulations of the porous medium equations and of blow-up questions in the Patlak-Keller-Segel model.
Application of standard and refined heat balance integral methods to one-dimensional Stefan problems
Abstract:
The work in this paper concerns the study of conventional and refined heat balance integral methods for a number of phase change problems. These include standard test problems, both with one and two phase changes, which have exact solutions to enable us to test the accuracy of the approximate solutions. We also consider situations where no analytical solution is available and compare these to numerical solutions. It is popular to use a quadratic profile as an approximation of the temperature, but we show that a cubic profile, seldom considered in the literature, is far more accurate in most circumstances. In addition, the refined integral method can give greater improvement still and we develop a variation on this method which turns out to be optimal in some cases. We assess which integral method is better for various problems, showing that it is largely dependent on the specified boundary conditions.
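The standard test problems with exact solutions mentioned above are typically of Neumann type, where the melt front moves as s(t) = 2λ√t and λ solves a transcendental equation. A minimal sketch of computing λ by bisection for the textbook one-phase problem follows; the equation form is the classical one-phase result, assumed here rather than quoted from the paper.

```python
import math

def neumann_lambda(Ste, lo=1e-9, hi=5.0, tol=1e-12):
    """Solve sqrt(pi) * lam * exp(lam^2) * erf(lam) = Ste by bisection.
    The root lam gives the exact front position s(t) = 2 * lam * sqrt(t)
    (in nondimensional variables), against which approximate integral
    methods can be benchmarked."""
    f = lambda lam: math.sqrt(math.pi) * lam * math.exp(lam**2) * math.erf(lam) - Ste
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:      # f is increasing in lam, so the root is below mid
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

lam = neumann_lambda(Ste=1.0)
```

Having λ in hand, the accuracy of a quadratic or cubic profile in a heat balance integral method can be judged by comparing the approximate front coefficient against this exact value.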
Abstract:
We investigate in this note the dynamics of a one-dimensional Keller-Segel-type model on the half-line. In contrast to the classical configuration, the chemical production term is located on the boundary. We prove, under suitable assumptions, the following dichotomy, which is reminiscent of the two-dimensional Keller-Segel system: solutions are global if the mass is below the critical mass, they blow up in finite time above the critical mass, and they converge to some equilibrium at the critical mass. Entropy techniques are presented which aim at providing quantitative convergence results in the subcritical case. The note ends with a brief introduction to a more realistic (still one-dimensional) model.
Abstract:
Is the cohomology of the classifying space of a p-compact group, with Noetherian twisted coefficients, a Noetherian module? This note provides such a generalization of the Evens-Venkov Theorem to p-compact groups, over the ring of p-adic integers. We consider the cohomology of a space with coefficients in a module, and we compare Noetherianity over the field with p elements with Noetherianity over the p-adic integers, in the case when the fundamental group is a finite p-group.
Abstract:
A joint distribution of two discrete random variables with finite support can be displayed as a two-way table of probabilities adding to one. Assume that this table has n rows and m columns and that all probabilities are non-null. Such a table can be seen as an element in the simplex of n · m parts. In this context, the marginals are identified as compositional amalgams and the conditionals (rows or columns) as subcompositions. Also, simplicial perturbation appears as Bayes' theorem. However, the Euclidean elements of the Aitchison geometry of the simplex can also be translated into the table of probabilities: subspaces, orthogonal projections, distances. Two important questions are addressed: (a) given a table of probabilities, which is the nearest independent table to the initial one? (b) which is the largest orthogonal projection of a row onto a column, or, equivalently, which is the information in a row explained by a column, thus explaining the interaction? To answer these questions three orthogonal decompositions are presented: (1) by columns and a row-wise geometric marginal; (2) by rows and a column-wise geometric marginal; (3) by independent two-way tables and fully dependent tables representing row-column interaction. An important result is that the nearest independent table is the product of the two (row- and column-wise) geometric marginal tables. A corollary is that, in an independent table, the geometric marginals conform with the traditional (arithmetic) marginals. These decompositions can be compared with standard log-linear models.
Key words: balance, compositional data, simplex, Aitchison geometry, composition, orthonormal basis, arithmetic and geometric marginals, amalgam, dependence measure, contingency table
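The abstract's main result, that the nearest independent table (in the Aitchison geometry) is the normalized product of the row-wise and column-wise geometric marginals, can be sketched directly; the example table below is made up for illustration.

```python
import numpy as np

def geometric_marginals(P):
    """Row-wise and column-wise geometric marginals of a probability table."""
    rows = np.exp(np.log(P).mean(axis=1))  # geometric mean across columns
    cols = np.exp(np.log(P).mean(axis=0))  # geometric mean across rows
    return rows, cols

def nearest_independent(P):
    """Nearest independent table: outer product of the geometric marginals,
    rescaled (closure) so the table again sums to one."""
    r, c = geometric_marginals(P)
    T = np.outer(r, c)
    return T / T.sum()

# Hypothetical 2x3 table of strictly positive probabilities summing to one.
P = np.array([[0.10, 0.20, 0.05],
              [0.15, 0.30, 0.20]])
T = nearest_independent(P)
```

For the resulting table T, independence can be checked directly: T coincides with the outer product of its own (arithmetic) marginals, which also illustrates the corollary that geometric and arithmetic marginals conform for an independent table.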
Abstract:
The Western Alpine Arc was created during the Cretaceous and Tertiary orogenies. The interference patterns of the Tertiary structures suggest their formation during continental collision of the European and Adriatic Plates, with an accompanying anticlockwise rotation of the Adriatic indenter. Extensional structures are mainly related to ductile deformation by simple shear. These structures developed at a deep tectonic level, in granitic crustal rocks, at depths in excess of 10 km. In the early Palaeogene period of the Tertiary Orogeny, the main Tertiary nappe emplacement resulted from NW-directed thrusting of the Austroalpine, Penninic and Helvetic nappes. Heating of the deep zone of the Upper Cretaceous and Tertiary nappe stack by geothermal heat flow is responsible for the Tertiary regional metamorphism, which reached amphibolite-facies conditions in the Lepontine Gneiss Dome (geothermal gradient 25 °C/km). The Tertiary thrusting occurred mainly under prograde metamorphic conditions, with creation of a penetrative NW-SE-oriented stretching lineation, X(1) (finite extension), parallel to the direction of simple shear. The earliest cooling after the culmination of the Tertiary metamorphism, some 38 Ma ago, is recorded by the cooling curves of the Monte Rosa and Mischabel nappes to the west and the Suretta Nappe to the east of the Lepontine Gneiss Dome. The onset of dextral transpression, with a strong extension parallel to the mountain belt, and the oldest S-vergent 'backfolding' took place some 35 to 30 Ma ago, during retrograde amphibolite-facies conditions and before the intrusion of the Oligocene dikes north of the Periadriatic Line. The main updoming of the Lepontine Gneiss Dome started some 32-30 Ma ago with the intrusion of the Bergell tonalites and granodiorites, concomitant with S-vergent backfolding and backthrusting and dextral strike-slip movements along the Tonale and Canavese Lines (Argand's Insubric phase).
Subsequently, the center of main updoming migrated slowly to the west, reaching the Simplon region some 20 Ma ago. This was contemporaneous with the westward migration of the Adriatic indenter. Between 20 Ma and the present, the Western Aar Massif-Toce culmination was the center of strong uplift. The youngest S-vergent backfolds, the Glishorn anticline and the Berisal syncline fold the 12 Ma Rb/Sr biotite isochron and are cut by the 11 Ma old Rhone-Simplon Line. The discrete Rhone-Simplon Line represents a late retrograde manifestation in the preexisting ductile Simplon Shear Zone. This fault zone is still active today. The Oligocene-Neogene dextral transpression and extension in the Simplon area were concurrent with thrusting to the northwest of the Helvetic nappes, the Prealpes (35-15 Ma) and with the Jura thin-skinned thrust (11-3 Ma). It was also contemporaneous with thrusting to the south of the Bergamasc (> 35-5 Ma) and Milan thrusts (16-5 Ma).
Abstract:
Piecewise linear systems arise as mathematical models in many practical applications, often from the linearization of nonlinear systems. There are two main approaches to dealing with these systems, according to their continuous- or discrete-time aspects. We propose an approach based on a state transformation, more particularly a partition of the phase portrait into regions, where each subregion is modeled as a two-dimensional linear time-invariant system. The Takagi-Sugeno model, which is a combination of the local models, is then calculated. The simulation results show that the Alpha partition is well suited for dealing with such a system.
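The Takagi-Sugeno combination of local models mentioned above can be sketched as a membership-weighted blend of local linear dynamics; the matrices, the triangular membership functions, and the scheduling variable below are all hypothetical, not the paper's partition.

```python
import numpy as np

# Two assumed local linear models x' = A_i x for two regions of the
# phase portrait.
A1 = np.array([[0.0, 1.0], [-1.0, -0.5]])
A2 = np.array([[0.0, 1.0], [-2.0, -0.1]])

def memberships(z, z_min=-1.0, z_max=1.0):
    """Normalized triangular memberships of the scheduling variable z;
    they sum to one, so the blend below is convex."""
    z = float(np.clip(z, z_min, z_max))
    h2 = (z - z_min) / (z_max - z_min)
    return 1.0 - h2, h2

def ts_dynamics(x):
    """Takagi-Sugeno blend: the effective dynamics are a convex
    combination of the local models, weighted by the memberships of x[0]."""
    h1, h2 = memberships(x[0])
    return (h1 * A1 + h2 * A2) @ x

dx = ts_dynamics(np.array([0.0, 1.0]))
```

At a region boundary both memberships are active and the model interpolates smoothly between the local dynamics, which is the essential point of the Takagi-Sugeno construction.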
Abstract:
In this paper a one-phase supercooled Stefan problem, with a nonlinear relation between the phase change temperature and the front velocity, is analysed. The model with the standard linear approximation, valid for small supercooling, is first examined asymptotically. The nonlinear case is more difficult to analyse and only two simple asymptotic results are found. We then apply an accurate heat balance integral method to make further progress. Finally, we compare the results found against numerical solutions. The results show that for large supercooling the linear model may be highly inaccurate and even qualitatively incorrect. Similarly, as the Stefan number β → 1⁺, the classic Neumann solution, which exists down to β = 1, is far from the linear and nonlinear supercooled solutions and can significantly overpredict the solidification rate.
Abstract:
Discrete data arise in various research fields, typically when the observations are count data. I propose a robust and efficient parametric procedure for the estimation of discrete distributions. The estimation is done in two phases. First, a very robust, but possibly inefficient, estimate of the model parameters is computed and used to identify outliers. Then the outliers are either removed from the sample or given low weights, and a weighted maximum likelihood estimate (WML) is computed. The weights are determined via an adaptive process such that, if the data follow the model, asymptotically no observation is downweighted. I prove that the final estimator inherits the breakdown point of the initial one, and that its influence function at the model is the same as the influence function of the maximum likelihood estimator, which strongly suggests that it is asymptotically fully efficient. The initial estimator is a minimum disparity estimator (MDE). MDEs can be shown to have full asymptotic efficiency, and some MDEs have very high breakdown points and very low bias under contamination. Several initial estimators are considered, and the performances of the WMLs based on each of them are studied. It turns out that in a great variety of situations the WML substantially improves on the initial estimator, both in terms of finite-sample mean square error and in terms of bias under contamination. Besides, the performance of the WML is rather stable under a change of the MDE, even if the MDEs have very different behaviors. Two examples of application of the WML to real data are considered.
In both of them, the necessity for a robust estimator is clear: the maximum likelihood estimator is badly corrupted by the presence of a few outliers. This procedure is particularly natural in the discrete distribution setting, but could be extended to the continuous case, for which a possible procedure is sketched.
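The two-phase procedure described above can be sketched for a Poisson model. The initial robust estimate here is a simple median-based one, standing in for the minimum disparity estimator (MDE) of the paper, and the hard 0/1 outlier cutoff is an assumed heuristic rather than the paper's adaptive weighting rule.

```python
import numpy as np

def wml_poisson(x, cutoff=3.0):
    """Two-phase weighted maximum likelihood sketch for a Poisson mean."""
    # Phase 1: robust (but possibly inefficient) initial estimate.
    lam0 = np.median(x)
    # Flag observations far from the fitted model as outliers
    # (standardized distance; hard 0/1 weights, unlike the paper's
    # adaptive downweighting).
    z = np.abs(x - lam0) / np.sqrt(max(lam0, 1e-9))
    w = (z <= cutoff).astype(float)
    # Phase 2: weighted maximum likelihood. For the Poisson mean this
    # reduces to the weighted sample mean.
    return np.sum(w * x) / np.sum(w)

x = np.array([3, 4, 2, 5, 3, 4, 100])  # one gross outlier
lam = wml_poisson(x)
```

On this made-up sample the plain maximum likelihood estimate (the sample mean, about 17.3) is badly corrupted by the single outlier, while the two-phase estimate stays close to the bulk of the data, which is exactly the behavior the abstract motivates.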
Abstract:
We report experimental and numerical results showing how certain N-dimensional dynamical systems are able to exhibit complex time evolutions based on the nonlinear combination of N-1 oscillation modes. The experiments were done with a family of thermo-optical systems of effective dynamical dimension varying from 1 to 6. The corresponding mathematical model is an N-dimensional vector field based on a scalar-valued nonlinear function of a single variable that is a linear combination of all the dynamic variables. We show how the complex evolutions appear in association with the occurrence of successive Hopf bifurcations in a saddle-node pair of fixed points, up to the point where their instability capabilities in N dimensions are exhausted. For this reason the observed phenomenon is denoted the full instability behavior of the dynamical system. The process through which the attractor responsible for the observed time evolution is formed may be rather complex and difficult to characterize. Nevertheless, the well-organized structure of the time signals suggests some generic mechanism of nonlinear mode mixing, which we associate with the cluster of invariant sets emerging from the pair of fixed points and with the influence of the neighboring saddle sets on the flow near the attractor. The generation of invariant tori is likely during the full instability development, and the global process may be considered a generalized Landau scenario for the emergence of irregular and complex behavior through the nonlinear superposition of oscillatory motions.
Abstract:
The Aitchison vector space structure for the simplex is generalized to a Hilbert space structure A2(P) for distributions and likelihoods on arbitrary spaces. Central notions of statistics, such as information or likelihood, can be identified in the algebraic structure of A2(P), together with their corresponding notions in compositional data analysis, such as the Aitchison distance or the centered log-ratio transform. In this way very elaborate aspects of mathematical statistics can be understood easily in the light of a simple vector space structure and of compositional data analysis. For example, combinations of statistical information, such as Bayesian updating, combination of likelihoods, and robust M-estimation functions, are simple additions/perturbations in A2(Pprior). Weighting observations corresponds to a weighted addition of the corresponding evidence. Likelihood-based statistics for general exponential families turns out to have a particularly easy interpretation in terms of A2(P). Regular exponential families form finite-dimensional linear subspaces of A2(P), and they correspond to finite-dimensional subspaces formed by their posteriors in the dual information space A2(Pprior). The Aitchison norm can be identified with mean Fisher information. The closing constant itself is identified with a generalization of the cumulant function and shown to be the Kullback-Leibler directed information. Fisher information is the local geometry of the manifold induced by the A2(P) derivative of the Kullback-Leibler information, and the space A2(P) can therefore be seen as the tangential geometry of statistical inference at the distribution P. The discussion of A2(P)-valued random variables, such as estimation functions or likelihoods, gives a further interpretation of Fisher information as the expected squared norm of evidence and a scale-free understanding of unbiased reasoning.
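The centered log-ratio transform and the Aitchison distance that the abstract builds on can be sketched for finite compositions in the simplex; the example compositions are made up.

```python
import numpy as np

def clr(p):
    """Centered log-ratio transform: log of each part relative to the
    geometric mean of the composition. The clr image sums to zero."""
    lp = np.log(p)
    return lp - lp.mean()

def aitchison_distance(p, q):
    """Aitchison distance = Euclidean distance between the clr images."""
    return np.linalg.norm(clr(p) - clr(q))

p = np.array([0.2, 0.3, 0.5])
q = np.array([0.1, 0.4, 0.5])
d = aitchison_distance(p, q)
```

A defining property of this geometry is scale invariance: rescaling a composition leaves its clr image, and hence all Aitchison distances, unchanged, which is what lets the same structure carry over from the simplex to densities and likelihoods known only up to a normalizing constant.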