274 resultados para UPFC, zeros
Resumo:
This paper examines a dataset that is modeled well by the Poisson-Log Normal process, and by this process mixed with Log Normal data, both of which are then turned into compositions. This generates compositional data containing zeros without any need for conditional models or for assuming that there are missing or censored values requiring adjustment. It also enables us to model dependence on covariates and dependence within the composition.
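As a sketch of the data-generating mechanism described above (the function and parameter names are ours, not the paper's), counts whose intensities are log-normal, closed to the simplex, produce compositions with genuine zeros:

```python
import numpy as np

rng = np.random.default_rng(0)

def poisson_lognormal_composition(n_samples, mu, sigma):
    """Draw counts with log-normal intensities, then close them to the
    simplex; zeros arise naturally from the Poisson stage."""
    mu = np.asarray(mu, dtype=float)
    lam = np.exp(rng.normal(mu, sigma, size=(n_samples, mu.size)))
    counts = rng.poisson(lam)                  # integer counts, possibly zero
    totals = counts.sum(axis=1, keepdims=True)
    # Guard against the (rare) all-zero row before closure.
    return counts / np.where(totals == 0, 1, totals)

comps = poisson_lognormal_composition(1000, mu=[2.0, 0.0, -1.0], sigma=1.0)
print("fraction of zero parts:", (comps == 0).mean())
```

No conditional model is needed here: the zeros are an ordinary outcome of the count stage, which is the point the abstract emphasises.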
Resumo:
The log-ratio methodology provides powerful tools for analyzing compositional data. Nevertheless, this methodology can only be applied to data sets without null values. Consequently, in data sets where zeros are present, a prior treatment becomes necessary. Recent advances in the treatment of compositional zeros have centered especially on zeros of a structural nature and on rounded zeros. These tools do not cover the particular case of count compositional data sets with null values. In this work we deal with “count zeros” and we introduce a treatment based on a mixed Bayesian-multiplicative estimation. We use the Dirichlet probability distribution as a prior and we estimate the posterior probabilities. Then we apply a multiplicative modification to the non-zero values. We present a case study where this new methodology is applied. Key words: count data, multiplicative replacement, composition, log-ratio analysis
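A minimal sketch of a Bayesian-multiplicative treatment in this spirit (the prior strength `s = 0.5` and the uniform base measure are our illustrative choices, not necessarily the paper's):

```python
import numpy as np

def bayes_mult_replacement(counts, s=0.5):
    """Replace count zeros by a Dirichlet-posterior estimate, then rescale
    the non-zero parts multiplicatively so their ratios are preserved."""
    counts = np.asarray(counts, dtype=float)
    D, n = counts.size, counts.sum()
    t = np.full(D, 1.0 / D)                 # uniform prior base measure
    post = (counts + s * t) / (n + s)       # posterior Dirichlet mean
    comp = counts / n                       # observed (closed) composition
    zero = counts == 0
    out = comp.copy()
    out[zero] = post[zero]                  # zeros get the posterior estimate
    out[~zero] = comp[~zero] * (1.0 - out[zero].sum())  # multiplicative fix
    return out

print(bayes_mult_replacement([0, 3, 7]))    # sums to 1, ratio 7:3 preserved
```

The multiplicative step is what keeps the log-ratios among the observed parts untouched, which is the property the log-ratio methodology needs.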
Resumo:
The R-package “compositions” is a tool for advanced compositional analysis. Its basic functionality has seen some conceptual improvement, containing now some facilities to work with and represent ilr bases built from balances, and an elaborated subsystem for dealing with several kinds of irregular data: (rounded or structural) zeroes, incomplete observations and outliers. The general approach to these irregularities is based on subcompositions: for an irregular datum, one can distinguish a “regular” subcomposition (where all parts are actually observed and the datum behaves typically) and a “problematic” subcomposition (with those unobserved, zero or rounded parts, or else where the datum shows an erratic or atypical behaviour). Systematic classification schemes are proposed for both outliers and missing values (including zeros), focusing on the nature of irregularities in the datum subcomposition(s). To compute statistics with values missing at random and structural zeros, a projection approach is implemented: a given datum contributes to the estimation of the desired parameters only on the subcomposition where it was observed. For data sets with values below the detection limit, two different approaches are provided: the well-known imputation technique, and also the projection approach. To compute statistics in the presence of outliers, robust statistics are adapted to the characteristics of compositional data, based on the minimum covariance determinant approach. The outlier classification is based on four different models of outlier occurrence and Monte-Carlo-based tests for their characterization. Furthermore, the package provides special plots helping to understand the nature of outliers in the dataset. Keywords: coda-dendrogram, lost values, MAR, missing data, MCD estimator, robustness, rounded zeros
Resumo:
The quantitative estimation of Sea Surface Temperatures (SST) from fossil assemblages is a fundamental issue in palaeoclimatic and palaeoceanographic investigations. The Modern Analogue Technique, a widely adopted method based on direct comparison of fossil assemblages with modern coretop samples, was revised with the aim of conforming it to compositional data analysis. The new CODAMAT method was developed by adopting the Aitchison metric as distance measure. Modern coretop datasets are characterised by a large amount of zeros. Zero replacement was carried out using a Bayesian approach, based on a posterior estimation of the parameter of the multinomial distribution. The number of modern analogues from which to reconstruct the SST was determined by means of a multiple approach, considering the proxies correlation matrix, the Standardized Residual Sum of Squares and the Mean Squared Distance. This new CODAMAT method was applied to the planktonic foraminiferal assemblages of a core recovered in the Tyrrhenian Sea. Key words: Modern analogues, Aitchison distance, Proxies correlation matrix, Standardized Residual Sum of Squares
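The Aitchison metric adopted by CODAMAT as a distance measure can be computed as the Euclidean distance between centred log-ratio (clr) coordinates; a minimal sketch (applicable after zero replacement, since clr requires strictly positive parts):

```python
import numpy as np

def clr(x):
    """Centred log-ratio transform of a strictly positive composition."""
    x = np.asarray(x, dtype=float)
    return np.log(x) - np.mean(np.log(x))

def aitchison_distance(x, y):
    """Aitchison distance: Euclidean distance in clr coordinates."""
    return np.linalg.norm(clr(x) - clr(y))

# Scale invariance: closure of either argument does not affect the distance.
print(aitchison_distance([0.2, 0.3, 0.5], [0.1, 0.4, 0.5]))
```

Scale invariance is what makes this metric suitable for comparing fossil and coretop assemblages counted to different totals.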
Resumo:
The objectives of this thesis are: 1. To study the relationship between cancer incidence and mortality and environmental factors, in particular air pollution, controlling for socioeconomic factors. 2. To use the spatial statistical methods appropriate to each type of design. 3. To distinguish in the models the different sources of spatial extra-variability. 4. To control the problem of excess zeros inherent in some of the neoplasms of environmental interest. Conclusions: - Both the incidence and the mortality of the neoplasms presented two sources of extra-variation: spatial extra-variation, whereby neighbouring units tend to present similar incidence/mortality ratios, and non-spatial heterogeneity. In general, the spatial extra-variability turned out to be much greater than the non-spatial one. - To smooth the SIRs/SMRs corresponding to variables with a percentage of zeros above 40-50%, a model that captures this behaviour must be used. - The best model in terms of fit for capturing the excess of zeros in the variables of interest turned out to be the mixed relative-risk model. - The smoothed SIRs/SMRs show a clear geographical pattern only for some neoplasms of environmental interest. - Part of the variability remaining in the smoothed SIRs/SMRs could be explained by introducing explanatory variables, in particular air pollution and socioeconomic variables. - Since the air pollutants were observed under a geostatistical design and the neoplasms of environmental interest under a lattice design, the exposure surface was modelled. - The effect of the pollutant in each municipality/census tract was approximated by introducing into the model the average value in each area and the within-area variability. - The effect of the pollutant was considered random, in the sense that it could differ across areas.
- Socioeconomic conditions were another of the variables that reduced the variability remaining in the smoothed SIRs/SMRs. - Explanatory variables observed under a lattice design, such as the deprivation index, were introduced into the model as fixed effects. - The effect of deprivation on the incidence and/or mortality of cancer of the trachea, bronchi and lung, controlling for air pollutants, was greater in women than in men. - High concentrations of air pollutants increase the risk of neoplasms of environmental interest, controlling for socioeconomic conditions.
Resumo:
The perspex machine arose from the unification of projective geometry with the Turing machine. It uses a total arithmetic, called transreal arithmetic, that contains real arithmetic and allows division by zero. Transreal arithmetic is redefined here. The new arithmetic has both a positive and a negative infinity, which lie at the extremes of the number line, and a number, nullity, that lies off the number line. We prove that nullity, 0/0, is a number. Hence a number may have one of four signs: negative, zero, positive, or nullity. It is, therefore, impossible to encode the sign of a number in one bit, as floating-point arithmetic attempts to do, resulting in the difficulty of having both positive and negative zeros and NaNs. Transrational arithmetic is consistent with Cantor arithmetic. In an extension to real arithmetic, the product of zero, an infinity, or nullity with its reciprocal is nullity, not unity. This avoids the usual contradictions that follow from allowing division by zero. Transreal arithmetic has a fixed algebraic structure and does not admit options as IEEE floating-point arithmetic does. Most significantly, nullity has a simple semantics that is related to zero. Zero means "no value" and nullity means "no information." We argue that nullity is as useful to a manufactured computer as zero is to a human computer. The perspex machine is intended to offer one solution to the mind-body problem by showing how the computable aspects of mind and, perhaps, the whole of mind relate to the geometrical aspects of body and, perhaps, the whole of body. We review some of Turing's writings and show that he held the view that his machine has spatial properties. In particular, that it has the property of being a 7D lattice of compact spaces. Thus, we read Turing as believing that his machine relates computation to geometrical bodies. We simplify the perspex machine by substituting an augmented Euclidean geometry for projective geometry.
This leads to a general-linear perspex-machine which is very much easier to program than the original perspex-machine. We then show how to map the whole of perspex space into a unit cube. This allows us to construct a fractal of perspex machines with the cardinality of a real-numbered line or space. This fractal is the universal perspex machine. It can solve, in unit time, the halting problem for itself and for all perspex machines instantiated in real-numbered space, including all Turing machines. We cite an experiment that has been proposed to test the physical reality of the perspex machine's model of time, but we make no claim that the physical universe works this way or that it has the cardinality of the perspex machine. We leave it that the perspex machine provides an upper bound on the computational properties of physical things, including manufactured computers and biological organisms, that have a cardinality no greater than that of the real-number line.
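The division rules stated in the abstract can be sketched as a total operation (representing nullity by NaN is our implementation convenience, not the paper's; transreal nullity, unlike NaN, compares equal to itself):

```python
import math

NULLITY = float("nan")   # stand-in for the transreal number nullity

def transreal_div(a, b):
    """Total division: x/0 is +inf, -inf or nullity according to the sign
    of x, and an infinity divided by an infinity carries no information."""
    if math.isnan(a) or math.isnan(b):
        return NULLITY
    if b == 0:
        if a > 0:
            return math.inf          # positive/0 = +infinity
        if a < 0:
            return -math.inf         # negative/0 = -infinity
        return NULLITY               # 0/0 = nullity
    if math.isinf(a) and math.isinf(b):
        return NULLITY               # inf * (1/inf) = inf * 0 = nullity
    return a / b                     # ordinary real division otherwise
```

The inf/inf branch mirrors the abstract's rule that the product of an infinity with its reciprocal is nullity, not unity.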
Resumo:
A simple parameter adaptive controller design methodology is introduced in which steady-state servo tracking properties provide the major control objective. This is achieved without cancellation of process zeros and hence the underlying design can be applied to non-minimum phase systems. As with other self-tuning algorithms, the design (user specified) polynomials of the proposed algorithm define the performance capabilities of the resulting controller. However, with the appropriate definition of these polynomials, the synthesis technique can be shown to admit different adaptive control strategies, e.g. self-tuning PID and self-tuning pole-placement controllers. The algorithm can therefore be thought of as an embodiment of other self-tuning design techniques. The performances of some of the resulting controllers are illustrated using simulation examples and the on-line application to an experimental apparatus.
Resumo:
In this paper we derive novel approximations to trapped waves in a two-dimensional acoustic waveguide whose walls vary slowly along the guide, and at which either Dirichlet (sound-soft) or Neumann (sound-hard) conditions are imposed. The guide contains a single smoothly bulging region of arbitrary amplitude, but is otherwise straight, and the modes are trapped within this localised increase in width. Using a similar approach to that in Rienstra (2003), a WKBJ-type expansion yields an approximate expression for the modes which can be present, which display either propagating or evanescent behaviour; matched asymptotic expansions are then used to derive connection formulae which bridge the gap across the cut-off between propagating and evanescent solutions in a tapering waveguide. A uniform expansion is then determined, and it is shown that appropriate zeros of this expansion correspond to trapped mode wavenumbers; the trapped modes themselves are then approximated by the uniform expansion. Numerical results determined via a standard iterative method are then compared to results of the full linear problem calculated using a spectral method, and the two are shown to be in excellent agreement, even when $\epsilon$, the parameter characterising the slow variations of the guide’s walls, is relatively large.
Resumo:
In this paper we develop and apply methods for the spectral analysis of non-selfadjoint tridiagonal infinite and finite random matrices, and for the spectral analysis of analogous deterministic matrices which are pseudo-ergodic in the sense of E. B. Davies (Commun. Math. Phys. 216 (2001), 687–704). As a major application to illustrate our methods we focus on the “hopping sign model” introduced by J. Feinberg and A. Zee (Phys. Rev. E 59 (1999), 6433–6443), in which the main objects of study are random tridiagonal matrices which have zeros on the main diagonal and random ±1’s as the other entries. We explore the relationship between spectral sets in the finite and infinite matrix cases, and between the semi-infinite and bi-infinite matrix cases, for example showing that the numerical range and p-norm ε-pseudospectra (ε > 0, p ∈ [1, ∞]) of the random finite matrices converge almost surely to their infinite matrix counterparts, and that the finite matrix spectra are contained in the infinite matrix spectrum Σ. We also propose a sequence of inclusion sets for Σ which we show is convergent to Σ, with the nth element of the sequence computable by calculating smallest singular values of (large numbers of) n×n matrices. We propose similar convergent approximations for the 2-norm ε-pseudospectra of the infinite random matrices, these approximations sandwiching the infinite matrix pseudospectra from above and below.
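The computable inclusion sets mentioned above rest on smallest singular values of finite sections; a sketch for a finite section of the hopping sign model (the exact randomisation convention for the off-diagonals is our assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

def hopping_sign_matrix(n):
    """A finite section of the hopping sign model: zeros on the main
    diagonal, unit sub-diagonal, random +-1 super-diagonal."""
    sub = np.ones(n - 1)
    sup = rng.choice([-1.0, 1.0], size=n - 1)
    return np.diag(sub, -1) + np.diag(sup, 1)

def in_pseudospectrum(A, z, eps):
    """z lies in the (2-norm) eps-pseudospectrum iff the smallest
    singular value of A - zI is below eps."""
    smin = np.linalg.svd(A - z * np.eye(A.shape[0]), compute_uv=False)[-1]
    return bool(smin < eps)
```

Scanning `z` over a grid and shading where `in_pseudospectrum` holds gives the standard singular-value picture of a finite-section pseudospectrum.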
Resumo:
Let H ∈ C²(ℝ^(N×n)), H ≥ 0. The PDE system (1) arises as the Euler-Lagrange PDE of vectorial variational problems for the functional E_∞(u, Ω) = ‖H(Du)‖_(L^∞(Ω)) defined on maps u : Ω ⊆ ℝⁿ → ℝᴺ. System (1) first appeared in the author's recent work. The scalar case, though, has a long history initiated by Aronsson. Herein we study the solutions of (1), with emphasis on the case n = 2 ≤ N with H the Euclidean norm on ℝ^(N×n), which we call the “∞-Laplacian”. By establishing a rigidity theorem for rank-one maps of independent interest, we analyse a phenomenon of separation of the solutions into phases with qualitatively different behaviour. As a corollary, we extend to N ≥ 2 the Aronsson-Evans-Yu theorem on the non-existence of zeros of |Du| and prove a maximum principle. We further characterise all H for which (1) is elliptic, and also study the initial value problem for the ODE system arising for n = 1, but with H(·, u, u′) depending on all of its arguments.
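The displayed system (1) is not reproduced in this listing; for orientation, our reading of the "∞-Laplacian" case (H the Euclidean norm, with the projection notation hedged as an assumption) is:

```latex
% The "infinity-Laplacian" system, as we read (1) for H the Euclidean norm:
\Delta_\infty u \;:=\; \Big( Du \otimes Du \,+\, |Du|^2\,[Du]^{\perp} \otimes I \Big) : D^2 u \;=\; 0 ,
% where [Du]^{\perp} denotes the orthogonal projection onto the orthogonal
% complement of the range of Du; for N = 1 the second term vanishes on
% {Du \neq 0} and the system reduces to Aronsson's scalar equation.
```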
Resumo:
We study a one-dimensional nonlinear reaction-diffusion system coupled at the boundary. Such a system arises from modeling the temperature distribution on two bars of the same length, joined together, with different diffusion coefficients. We prove the transversality property of unstable and stable manifolds assuming all equilibrium points are hyperbolic. To this end, we write the system as an equation with a discontinuous diffusion coefficient. We then study the nonincreasing property of the number of zeros of a linearized nonautonomous equation as well as the Sturm-Liouville properties of the solutions of a linear elliptic problem.
Resumo:
Seismic processing has the main objective of providing an adequate picture of the geological structures in the subsurface of sedimentary basins. Among the key steps of this process are the enhancement of seismic reflections by filtering out unwanted signals, called seismic noise, the improvement of the signals of interest, and the application of imaging procedures. Seismic noise may be random or coherent. This dissertation presents a technique to attenuate coherent noise, such as ground roll and multiple reflections, based on the Empirical Mode Decomposition method. This method is applied to decompose the seismic trace into Intrinsic Mode Functions. These functions have the properties of being symmetric, having a local mean equal to zero, and having the same number of zero-crossings and extrema. The developed technique was tested on synthetic and real data, and the results were considered encouraging.
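The defining IMF properties mentioned above can be checked numerically; a rough sketch that tests only the counting property (zero-crossings and extrema differing by at most one), leaving the local-mean condition aside:

```python
import numpy as np

def imf_counting_property(x):
    """Return True if the counts of zero-crossings and extrema of the
    sampled signal x differ by at most one (one IMF requirement)."""
    x = np.asarray(x, dtype=float)
    zero_crossings = np.sum(np.signbit(x[:-1]) != np.signbit(x[1:]))
    d = np.diff(x)
    extrema = np.sum(np.signbit(d[:-1]) != np.signbit(d[1:]))
    return abs(int(zero_crossings) - int(extrema)) <= 1

t = np.linspace(0, 1, 512, endpoint=False)
print(imf_counting_property(np.sin(2 * np.pi * 5 * t)))  # a pure tone
```

A signal riding on a nonzero offset fails this test, which is exactly why the sifting step of Empirical Mode Decomposition removes the local mean first.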
Resumo:
There are two main approaches to adaptive control. One is the so-called model reference adaptive control (MRAC), and the other is the so-called adaptive pole placement control (APPC). In MRAC, a reference model is chosen to generate the desired trajectory that the plant output has to follow, and this can require cancellation of the plant zeros. Due to its flexibility in the choice of controller design methodology (state feedback, compensator design, linear quadratic, etc.) and of adaptive law (least squares, gradient, etc.), APPC is the most general type of adaptive control. Traditionally, it has been developed in an indirect approach and, as an advantage, it may be applied to non-minimum-phase plants, because it does not involve plant zero-pole cancellations. Integration with variable structure systems makes it possible to combine a fast transient response with robustness to parametric uncertainties and disturbances. In this work, a variable structure adaptive pole placement control (VS-APPC) is proposed, in which new switching laws are used instead of the traditional integral adaptive laws. Additionally, simulation results for an unstable first-order system, and simulation and practical results for a three-phase induction motor, are shown.
Resumo:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)