889 results for discrete polynomial transform
Abstract:
Let N = {y > 0} and S = {y < 0} be the half-planes of R^2 having as common boundary the line D = {y = 0}. Let X and Y be polynomial vector fields defined in N and S, respectively, leading to a discontinuous piecewise polynomial vector field Z = (X, Y). This work pursues the stability and the transition analysis of solutions of Z between N and S, started by Filippov (1988) and Kozlova (1984) and reformulated by Sotomayor-Teixeira (1995) in terms of the regularization method. This method consists in analyzing a one-parameter family of continuous vector fields Z_epsilon, defined by averaging X and Y. This family approaches Z when the parameter goes to zero. The results of Sotomayor-Teixeira and Sotomayor-Machado (2002), providing conditions on (X, Y) for the regularized vector fields to be structurally stable on planar compact connected regions, are extended to discontinuous piecewise polynomial vector fields on R^2. Pertinent genericity results for vector fields satisfying the above stability conditions are also extended to the present case. A procedure for the study of discontinuous piecewise vector fields at infinity through a compactification is proposed here.
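A minimal sketch of the averaging construction described above, assuming a simple transition function and illustrative polynomial fields X and Y (neither taken from the work itself):

    import numpy as np

    def phi(t):
        # A simple monotone transition: -1 for t <= -1, +1 for t >= 1 (one possible choice).
        return np.sign(t) if abs(t) >= 1 else np.sin(np.pi * t / 2)

    def Z_eps(p, eps, X, Y):
        # Regularized field: a convex combination of X and Y weighted through phi(y/eps).
        x, y = p
        w = 0.5 * (1 + phi(y / eps))        # w -> 1 in N = {y > 0}, w -> 0 in S = {y < 0}
        return w * np.asarray(X(x, y)) + (1 - w) * np.asarray(Y(x, y))

    # Illustrative polynomial fields on the two half-planes (hypothetical examples).
    X = lambda x, y: (1.0, x**2 - 1.0)
    Y = lambda x, y: (1.0, -1.0)
    print(Z_eps((0.3, 0.02), 0.1, X, Y))

As eps goes to zero, Z_eps agrees with X on {y > eps} and with Y on {y < -eps}, which is the sense in which the family approaches Z.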
Abstract:
For a locally compact Hausdorff space K and a Banach space X we denote by C_0(K, X) the space of X-valued continuous functions on K which vanish at infinity, provided with the supremum norm. Let n be a positive integer, Gamma an infinite set with the discrete topology, and X a Banach space having non-trivial cotype. We first prove that if the nth derived set of K is not empty, then the Banach-Mazur distance between C_0(Gamma, X) and C_0(K, X) is greater than or equal to 2n + 1. We also show that the Banach-Mazur distance between C_0(N, X) and C([1, omega^n k], X) is exactly 2n + 1, for any positive integers n and k. These results extend and provide a vector-valued version of some 1970 Cambern theorems concerning the cases where n = 1 and X is the scalar field.
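For context, the Banach-Mazur distance used above is the standard one,

    d_{BM}(E, F) = \inf \{ \|T\| \, \|T^{-1}\| : T \colon E \to F \ \text{is an isomorphism} \},

so the quoted results bound how far C_0(Gamma, X) and C_0(K, X) are from being isometrically isomorphic.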
Abstract:
The aim of solving the Optimal Power Flow problem is to determine the optimal state of an electric power transmission system, that is, the voltage magnitudes, the phase angles, and the transformer tap ratios that optimize the performance of a given system while satisfying its physical and operating constraints. The Optimal Power Flow problem is modeled as a large-scale mixed-discrete nonlinear programming problem. This paper proposes a method for handling the discrete variables of the Optimal Power Flow problem based on a penalty function. With the penalty function included in the objective function, a sequence of nonlinear programming problems with only continuous variables is obtained, whose solutions converge to a solution of the mixed problem. The resulting nonlinear programming problems are solved by a Primal-Dual Logarithmic-Barrier Method. Numerical tests using the IEEE 14-, 30-, 118- and 300-bus test systems indicate that the method is efficient. (C) 2012 Elsevier B.V. All rights reserved.
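The article's exact penalty is not reproduced here; as an illustration of the general idea, a commonly used choice is a smooth sinusoidal penalty that vanishes exactly on the discrete grid (the tap step 0.0125 below is only an assumed value):

    import numpy as np

    def discrete_penalty(x, step, weight):
        # Smooth and non-negative; zero whenever x lies on a multiple of `step`.
        return weight * np.sin(np.pi * np.asarray(x) / step) ** 2

    # E.g. add discrete_penalty(taps, 0.0125, w) to the OPF objective, solve the resulting
    # continuous NLP (here, by a primal-dual log-barrier method), and increase w between
    # solves until the tap ratios settle on discrete values.
    print(discrete_penalty([1.0125, 1.019], 0.0125, 10.0))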
Abstract:
We deal with the optimization of the production of branched sheet metal products. New forming techniques for sheet metal give rise to a wide variety of possible profiles and possible ways of production. In particular, we show how the problem of producing a given profile geometry can be modeled as a discrete optimization problem. We provide a theoretical analysis of the model in order to improve its solution time. In this context we give the complete convex hull description of some substructures of the underlying polyhedron. Moreover, we introduce a new class of facet-defining inequalities that represent connectivity constraints for the profile and show how these inequalities can be separated in polynomial time. Finally, we present numerical results for various test instances, both real-world and academic examples.
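The profile model itself is not reproduced here, but the separation idea can be illustrated with a toy cutting-plane loop: connectivity-type cuts (at least one unit of fractional edge weight across every node subset) are generated lazily for a small invented graph. In the paper the separation is polynomial-time; the exhaustive subset enumeration below is only acceptable because the toy graph is tiny.

    import itertools
    import numpy as np
    from scipy.optimize import linprog

    nodes = [0, 1, 2, 3]
    edges = [(0, 1), (1, 2), (2, 3), (0, 3), (0, 2)]
    cost = np.array([1.0, 1.0, 1.0, 1.0, 3.0])

    cuts = []                                   # each cut: -sum_{e in delta(S)} x_e <= -1
    while True:
        A_ub = np.array(cuts) if cuts else None
        b_ub = -np.ones(len(cuts)) if cuts else None
        res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * len(edges), method="highs")
        x = res.x
        violated = None                         # separation: find S with fractional cut weight < 1
        for r in range(1, len(nodes)):
            for S in itertools.combinations(nodes, r):
                delta = [i for i, (u, v) in enumerate(edges) if (u in S) != (v in S)]
                if sum(x[i] for i in delta) < 1 - 1e-9:
                    violated = delta
                    break
            if violated is not None:
                break
        if violated is None:
            break
        row = np.zeros(len(edges))
        row[violated] = -1.0
        cuts.append(row)

    print(x, res.fun)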
Abstract:
This study aimed to evaluate the chemical interaction of collagen with some substances usually applied in dental treatments to increase the durability of adhesive restorations to dentin. Initially, the similarity between human dentin collagen and type I collagen obtained from commercial bovine Achilles deep tendon membranes was assessed by the Attenuated Total Reflectance technique of Fourier Transform Infrared (ATR-FTIR) spectroscopy. Subsequently, the effects of applying 35% phosphoric acid, 0.1 M ethylenediaminetetraacetic acid (EDTA), 2% chlorhexidine, and 6.5% proanthocyanidin solution on the microstructure of collagen and on the integrity of its triple helix were also evaluated by ATR-FTIR. It was observed that commercial type I collagen can be used as an efficient substitute for demineralized human dentin in studies that use spectroscopic analysis. The 35% phosphoric acid significantly altered the organic content of amides, proline and hydroxyproline of type I collagen. The surface treatments with 0.1 M EDTA, 2% chlorhexidine, or 6.5% proanthocyanidin did not promote deleterious structural changes in the collagen triple helix. The application of 6.5% proanthocyanidin to collagen promoted hydrogen bond formation. (c) 2012 Wiley Periodicals, Inc. J Biomed Mater Res Part B: Appl Biomater, 2012.
Abstract:
We report self-similar properties of periodic structures remarkably organized in the two-parameter space of a two-gene system described by a two-dimensional symmetric map. The map consists of difference equations derived from the chemical reactions for gene expression and regulation. We characterize the system using Lyapunov exponents and isoperiodic diagrams, identifying periodic windows known as Arnold tongues and shrimp-shaped structures. Period-adding sequences are observed for both kinds of periodic windows. We also identify Fibonacci-type sequences and the golden ratio for the Arnold tongues, and period multiple-of-three windows for the shrimps. (C) 2012 Elsevier B.V. All rights reserved.
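The gene-regulation map itself is not given in the abstract, so the sketch below only illustrates the standard numerical recipe for the Lyapunov exponents of a two-dimensional map (QR re-orthonormalization along an orbit); the Henon map at the end is a stand-in, not the model from the paper.

    import numpy as np

    def lyapunov_spectrum(f, jac, x0, n_iter=5000, n_skip=500):
        # Average the log of the diagonal of R in repeated QR factorizations of J @ Q.
        x = np.asarray(x0, dtype=float)
        q = np.eye(2)
        sums = np.zeros(2)
        for i in range(n_iter + n_skip):
            x_next = f(x)
            q, r = np.linalg.qr(jac(x) @ q)
            if i >= n_skip:
                sums += np.log(np.abs(np.diag(r)))
            x = x_next
        return sums / n_iter

    # Stand-in 2D map (Henon), NOT the two-gene map of the article.
    f = lambda v: np.array([1.0 - 1.4 * v[0] ** 2 + v[1], 0.3 * v[0]])
    jac = lambda v: np.array([[-2.8 * v[0], 1.0], [0.3, 0.0]])
    print(lyapunov_spectrum(f, jac, [0.1, 0.1]))   # approx. (0.42, -1.62)

Scanning such exponents (and the orbit periods) over a two-parameter grid is what produces the isoperiodic diagrams in which Arnold tongues and shrimp-shaped windows appear.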
Abstract:
This paper presents preliminary results on determining small displacements of a global positioning system (GPS) antenna fastened to a structure using only one L1 GPS receiver. Vibrations, periodic or not, are common in large structures such as bridges, footbridges, tall buildings, and towers under dynamic loads, and their behavior in the time and frequency domains is the subject of structural analysis studies. The hypothesis of this article is that any large structure presenting vibrations in the centimeter-to-millimeter range can be monitored through the phase measurements of a single high-data-rate L1 receiver, as long as the displacement is directed toward a particular satellite. In this scenario, the carrier phase is modulated by the antenna displacement. Over a period of a few dozen seconds, the displacement relative to the satellite, the satellite clock, and the atmospheric phase delays can be modeled as a polynomial function of time. The residuals of a polynomial adjustment then contain the phase modulation due to small displacements, random noise, short-term receiver clock instabilities, and multipath. The results showed that it is possible to detect displacements of centimeters in the phase data of a single satellite and of millimeters in the difference between the phases of two satellites. After a periodic nonsinusoidal displacement of 10 m was applied to the antenna, it was clearly recovered in the difference of the residuals. The frequency spectrum obtained by the fast Fourier transform (FFT) exhibited a well-defined peak at the third harmonic, well above the random noise, using the proposed third-degree polynomial model. DOI: 10.1061/(ASCE)SU.1943-5428.0000070. (C) 2012 American Society of Civil Engineers.
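A small synthetic illustration of the residual method described above (the data rate, trend coefficients, displacement amplitude and noise level are all invented; 0.1903 m is the L1 carrier wavelength):

    import numpy as np

    fs = 10.0                                                  # assumed receiver data rate (Hz)
    t = np.arange(0.0, 60.0, 1.0 / fs)
    trend = 2.0e3 + 35.0 * t - 0.4 * t**2 + 1.0e-3 * t**3      # geometry + clock + atmosphere
    motion = 0.008 * np.sign(np.sin(2 * np.pi * 0.5 * t))      # 8 mm non-sinusoidal motion
    noise = 1e-3 * np.random.default_rng(0).standard_normal(t.size)
    phase = trend + motion / 0.1903 + noise                    # carrier phase in cycles

    residuals = phase - np.polyval(np.polyfit(t, phase, 3), t) # third-degree polynomial fit
    spectrum = np.abs(np.fft.rfft(residuals))
    freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
    print(freqs[np.argmax(spectrum[1:]) + 1])                  # dominant frequency of the motion

Because the applied motion here is a square wave, its spectrum also contains odd harmonics, which is consistent with the third-harmonic peak reported above.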
Abstract:
In this paper, we consider the stochastic optimal control problem of discrete-time linear systems subject to Markov jumps and multiplicative noises under two criteria. The first is an unconstrained mean-variance trade-off performance criterion over time, and the second is a minimum-variance criterion over time with constraints on the expected output. We present explicit conditions for the existence of an optimal control strategy for these problems, generalizing previous results in the literature. We conclude the paper with a numerical example of a multi-period portfolio selection problem with regime switching, in which the goal is to minimize the sum of the variances of the portfolio over time under the restriction that the expected value of the portfolio remain greater than some minimum values specified by the investor. (C) 2011 Elsevier Ltd. All rights reserved.
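The optimal strategies themselves are not reproduced here; the sketch below only simulates the model class (a scalar discrete-time linear system with Markov regime switching and multiplicative noise) under an arbitrary fixed feedback, and evaluates one schematic form of the mean-variance trade-off criterion by Monte Carlo. All numbers are invented.

    import numpy as np

    rng = np.random.default_rng(1)
    P = np.array([[0.9, 0.1], [0.2, 0.8]])                   # regime transition matrix (assumed)
    A, B, sigma = [1.02, 0.97], [0.5, 0.5], [0.05, 0.15]     # regime-dependent parameters (assumed)
    K, lam, T, n_mc = -0.03, 0.5, 12, 10000                  # fixed feedback gain, trade-off weight

    x = np.ones(n_mc)
    theta = np.zeros(n_mc, dtype=int)
    cost = 0.0
    for _ in range(T):
        u = K * x
        a, b, s = np.take(A, theta), np.take(B, theta), np.take(sigma, theta)
        x = (a + s * rng.standard_normal(n_mc)) * x + b * u  # multiplicative noise on the state
        theta = (rng.random(n_mc) >= P[theta, 0]).astype(int)  # Markov jump of the regime
        cost += x.var() - lam * x.mean()                     # schematic mean-variance trade-off
    print(cost)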
Abstract:
This Letter reports an investigation of the optical properties of copper nanocubes as a function of size, as modeled by the discrete dipole approximation. In the far field, our results showed that the extinction resonances shifted from 595 to 670 nm as the size increased from 20 to 100 nm. Also, the highest optical efficiencies for absorption and scattering were obtained for nanocubes 60 and 100 nm in size, respectively. In the near field, the electric-field amplitudes were investigated at excitation wavelengths of 514, 633 and 785 nm. The E-fields increased with size and were highest at 633 nm. (c) 2012 Elsevier B.V. All rights reserved.
Abstract:
We prove that any two Poisson dependent elements in a free Poisson algebra and a free Poisson field of characteristic zero are algebraically dependent, thus answering positively a question from Makar-Limanov and Umirbaev (2007) [8]. We apply this result to give a new proof of the tameness of automorphisms for free Poisson algebras of rank two (see Makar-Limanov and Umirbaev (2011) [9], Makar-Limanov et al. (2009) [10]). (C) 2011 Elsevier Inc. All rights reserved.
Abstract:
This work is supported by Brazilian agencies Fapesp, CAPES and CNPq
Abstract:
Polynomial Chaos Expansion (PCE) is widely recognized as a flexible tool for representing different types of random variables and processes. However, applications to real, experimental data are still limited. In this article, PCE is used to represent the random time evolution of metal corrosion growth in marine environments. The PCE coefficients are determined so as to represent the data of 45 corrosion coupons tested by Jeffrey and Melchers (2001) at Taylors Beach, Australia. The accuracy of the representation and the possibilities for model extrapolation are considered in the study. Results show that reasonably accurate smooth representations of the corrosion process can be obtained; the accuracy is limited mainly because a smooth model is used to represent non-smooth corrosion data. Random corrosion leads to time-variant reliability problems, due to resistance degradation over time, and such problems are not trivial to solve, especially under random process loading. Two example problems are solved herein, showing how the developed PCE representations can be employed in the reliability analysis of structures subject to marine corrosion. Monte Carlo simulation is used to solve the resulting time-variant reliability problems, and an accurate, more computationally efficient solution is also presented.
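As a hedged illustration of how such a PCE can be fitted (the data, the single standard-normal germ, and the order-2 Hermite truncation below are assumptions, not the expansion actually used in the article):

    import numpy as np
    from numpy.polynomial.hermite_e import hermeval

    rng = np.random.default_rng(0)
    t = np.linspace(0.5, 4.0, 8)                       # exposure times (years), invented
    xi = rng.standard_normal(45)                       # one latent N(0,1) variable per coupon
    data = 0.1 * t[:, None] ** 0.6 * (1 + 0.3 * xi)    # synthetic corrosion-depth measurements

    order = 2                                          # d(t, xi) ~ sum_k c_k(t) He_k(xi)
    Psi = np.column_stack([hermeval(xi, np.eye(order + 1)[k]) for k in range(order + 1)])
    coeffs = np.linalg.lstsq(Psi, data.T, rcond=None)[0]   # regression-based PCE coefficients
    print(coeffs.shape)                                # (order+1) coefficients per time instant

Once the time-dependent coefficients c_k(t) are available, sampling xi yields smooth corrosion trajectories that can be fed to a Monte Carlo time-variant reliability analysis.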
Abstract:
The thesis consists of three independent parts. Part I: Polynomial amoebas. We study the amoeba of a polynomial, as defined by Gelfand, Kapranov and Zelevinsky. A central role in the treatment is played by a certain convex function which is linear in each complement component of the amoeba, which we call the Ronkin function. This function is used in two different ways. First, we use it to construct a polyhedral complex, which we call a spine, approximating the amoeba. Second, the Monge-Ampere measure of the Ronkin function has interesting properties which we explore. This measure can be used to derive an upper bound on the area of an amoeba in two dimensions. We also obtain results on the number of complement components of an amoeba, and consider possible extensions of the theory to varieties of codimension higher than 1. Part II: Differential equations in the complex plane. We consider polynomials in one complex variable arising as eigenfunctions of certain differential operators, and obtain results on the distribution of their zeros. We show that in the limit when the degree of the polynomial approaches infinity, its zeros are distributed according to a certain probability measure. This measure has its support on the union of finitely many curve segments, and can be characterized by a simple condition on its Cauchy transform. Part III: Radon transforms and tomography. This part is concerned with different weighted Radon transforms in two dimensions, in particular the problem of inverting such transforms. We obtain stability results for this inverse problem for rather general classes of weights, including weights of attenuation type with data acquisition limited to a 180 degrees range of angles. We also derive an inversion formula for the exponential Radon transform, with the same restriction on the angle.
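For reference, the Ronkin function mentioned in Part I is the standard average of log|f| over the torus above a point x in R^n,

    N_f(x) = \frac{1}{(2\pi i)^n} \int_{\mathrm{Log}^{-1}(x)} \log |f(z_1,\dots,z_n)| \, \frac{dz_1}{z_1} \wedge \dots \wedge \frac{dz_n}{z_n},

which is convex on R^n and linear on each connected component of the amoeba complement, as used above.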
Abstract:
Every seismic event produces seismic waves which travel throughout the Earth. Seismology is the science of interpreting measurements of these waves to derive information about the structure of the Earth. Seismic tomography is the most powerful tool for determining the 3D structure of the Earth's deep interior. Tomographic models obtained at the global and regional scales are an underlying tool for determining the geodynamical state of the Earth, showing evident correlation with other geophysical and geological characteristics. The global tomographic images of the Earth can be written as linear combinations of basis functions from a specifically chosen set, defining the model parameterization. A number of different parameterizations are commonly seen in the literature: seismic velocities in the Earth have been expressed, for example, as combinations of spherical harmonics or by means of the simpler characteristic functions of discrete cells. In this work we focus our attention on this aspect, evaluating a new type of parameterization based on wavelet functions. It is known from classical Fourier theory that a signal can be expressed as the sum of a, possibly infinite, series of sines and cosines, often referred to as a Fourier expansion. The big disadvantage of a Fourier expansion is that it has only frequency resolution and no time resolution. Wavelet analysis (or the wavelet transform) is probably the most recent solution to overcome the shortcomings of Fourier analysis. The fundamental idea behind this approach is to study the signal according to scale. Wavelets, in fact, are mathematical functions that cut up data into different frequency components, and then study each component with a resolution matched to its scale, so they are especially useful in the analysis of non-stationary processes that contain multi-scale features, discontinuities and sharp spikes. Wavelets are essentially used in two ways when applied to geophysical processes or signals: 1) as a basis for the representation or characterization of a process; 2) as an integration kernel for analysis, to extract information about the process. These two types of applications of wavelets in geophysics are the object of study of this work. We first use wavelets as a basis to represent and solve the tomographic inverse problem. After a brief introduction to seismic tomography theory, we assess the power of wavelet analysis in the representation of two different types of synthetic models; we then apply it to real data, obtaining surface-wave phase-velocity maps and evaluating its abilities by comparison with another type of parameterization (i.e., block parameterization). For the second type of wavelet application, we analyze the ability of the continuous wavelet transform in spectral analysis, starting again with some synthetic tests to evaluate its sensitivity and capability, and then apply the same analysis to real data to obtain local correlation maps between different models at the same depth or between different profiles of the same model.
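A minimal PyWavelets sketch of the first kind of use (a multiscale basis representation of a map); the toy phase-velocity map, the db4 wavelet and the threshold are arbitrary choices, not the parameterization or inversion used in this work:

    import numpy as np
    import pywt

    x, y = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64))
    vmap = 4.0 + 0.2 * x + 0.1 * y              # smooth background (km/s), invented
    vmap[20:28, 30:38] -= 0.3                   # a sharp local anomaly

    coeffs = pywt.wavedec2(vmap, "db4", level=3)             # multiscale parameterization
    arr, slices = pywt.coeffs_to_array(coeffs)
    arr = pywt.threshold(arr, 0.05, mode="hard")             # keep only significant coefficients
    rec = pywt.waverec2(pywt.array_to_coeffs(arr, slices, output_format="wavedec2"), "db4")
    print(np.abs(vmap - rec[:64, :64]).max())                # sparse model, small reconstruction error

In an inversion one solves for the retained wavelet coefficients instead of for cell values, which is what gives the scale-adaptive behavior discussed above; pywt.cwt plays the analogous role for the second, spectral-analysis use.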
Abstract:
We present a new strategy for constructing tensor-product spline spaces over quadtree and octree T-meshes. The proposed technique includes some simple rules for inferring local knot vectors to define the spline blending functions. For a given T-mesh, these rules yield a set of cubic spline functions that span a space with nice properties: it can reproduce cubic polynomials, the functions are C2-continuous and linearly independent, and spaces spanned by nested T-meshes are also nested. In order for the proposed rules to span spaces with these properties, the only requirement on the T-mesh is that it be a 0-balanced quadtree or octree.
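A small sketch of what one such local knot vector produces, namely a single cubic blending function evaluated by the Cox-de Boor recursion (the uniform knot vector below is only an example; the knot-inference rules of the paper are not reproduced):

    def bspline_basis(u, knots, p=3):
        # Value at u of the single degree-p B-spline defined by the local knot
        # vector `knots` (length p + 2), via the Cox-de Boor recursion.
        def N(i, k):
            if k == 0:
                return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
            left = 0.0 if knots[i + k] == knots[i] else \
                (u - knots[i]) / (knots[i + k] - knots[i]) * N(i, k - 1)
            right = 0.0 if knots[i + k + 1] == knots[i + 1] else \
                (knots[i + k + 1] - u) / (knots[i + k + 1] - knots[i + 1]) * N(i + 1, k - 1)
            return left + right
        return N(0, p)

    # Uniform local knot vector, evaluated at its middle knot (prints 2/3, the classical value).
    print(bspline_basis(0.5, [0.0, 0.25, 0.5, 0.75, 1.0]))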