14 results for Gaussian integers

at the Universidade Federal do Rio Grande do Norte (UFRN)


Relevance:

10.00%

Publisher:

Abstract:

In the Mathematics literature, some records highlight the difficulties encountered in the teaching-learning process of integers. In the past, and for a long time, many mathematicians experienced and overcame such difficulties, which become epistemological obstacles imposed on students and teachers nowadays. The present work comprises the results of a research project conducted in the city of Natal, Brazil, in the first half of 2010, at a state school and at a federal university. It involved a total of 45 students: 20 from middle school, 9 from high school and 16 from university. The central aim of this study was to identify, on the one hand, which approach used to justify the multiplication of integers is better understood by the students and, on the other hand, the elements present in the justifications which contribute to surmounting the epistemological obstacles in the processes of teaching and learning of integers. To that end, we tried to detect to what extent the epistemological obstacles faced by the students in the learning of integers resemble the difficulties experienced by mathematicians throughout human history. Given the nature of our object of study, we based the theoretical foundation of our research on works related to the daily life of Mathematics teaching, as well as on theorists who analyze the process of knowledge building. We conceived two research tools with the purpose of apprehending the following information about our subjects: school life; a diagnosis of their knowledge of integers and their operations, particularly the multiplication of two negative integers; and their understanding of four different justifications, as elaborated by mathematicians, for the rule of signs in multiplication. Regarding the types of approach used to explain the rule of signs (arithmetic, geometric, algebraic and axiomatic), we identified in the fieldwork that, when multiplying two negative numbers, the students could better understand the arithmetic approach. Our findings indicate that the approach to the rule of signs considered by the majority of students to be the easiest one can be used to help understand the notion of unification of the number line, an obstacle widely known nowadays in the teaching-learning process of integers.
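For reference, a minimal sketch of the algebraic kind of justification mentioned above (illustrative only; not necessarily the exact argument presented to the students): the rule of signs follows from distributivity and additive inverses alone.

```latex
% From a*0 = 0 and distributivity:
0 = (-a)\cdot 0 = (-a)\bigl(b + (-b)\bigr) = (-a)b + (-a)(-b)
% so (-a)(-b) is the additive inverse of (-a)b = -(ab):
\;\Longrightarrow\; (-a)(-b) = -\bigl[(-a)b\bigr] = -\bigl[-(ab)\bigr] = ab
```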

Relevance:

10.00%

Publisher:

Abstract:

The present dissertation analyses an early mathematical work of Leonhard Euler on Diophantine equations, De solutione problematum diophanteorum per numeros integros (On the solution of Diophantine problems in integers). It was published in 1738, although it had been presented to the St. Petersburg Academy of Sciences five years earlier. Euler solves the problem of making the general second-degree expression a perfect square, i.e., he seeks the whole-number solutions to the equation ax² + bx + c = y². For this purpose, he shows how to generate new solutions from those already obtained. Accordingly, he makes a succession of substitutions, equating terms and eliminating variables, until the problem reduces to finding the solution of the Pell equation. Euler erroneously assigns this type of equation to Pell. He also makes a number of restrictions to the equation ax² + bx + c = y² and works on several subthemes, from incomplete equations to polygonal numbers.
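As an illustration of the "new solutions from old" idea that the dissertation attributes to Euler, the following sketch generates solutions of the Pell equation from a known fundamental one (the recurrence is the standard one; the example values are illustrative, not taken from the dissertation):

```python
# Given the fundamental solution (x1, y1) of x^2 - D*y^2 = 1, further
# integer solutions follow from the recurrence
#   x' = x1*x + D*y1*y,   y' = x1*y + y1*x.

def pell_solutions(D, x1, y1, count):
    """Yield `count` solutions of x^2 - D*y^2 = 1 from a fundamental one."""
    x, y = x1, y1
    for _ in range(count):
        yield x, y
        x, y = x1 * x + D * y1 * y, x1 * y + y1 * x

# Example: D = 2, fundamental solution (3, 2), since 3^2 - 2*2^2 = 1.
for x, y in pell_solutions(2, 3, 2, 4):
    assert x * x - 2 * y * y == 1
    print(x, y)   # (3, 2), (17, 12), (99, 70), (577, 408)
```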

Relevance:

10.00%

Publisher:

Abstract:

Most algorithms for state estimation based on the classical model are only adequate for use in transmission networks. Few algorithms were developed specifically for distribution systems, probably because of the small amount of data available in real time. Most overhead feeders possess only current and voltage measurements at the medium-voltage bus-bar of the substation. Thus, classical algorithms are difficult to implement, even considering off-line acquired data as pseudo-measurements. However, the need to automate the operation of distribution networks, mainly with regard to the selectivity of protection systems, as well as to enable load-transfer maneuvers, is changing network planning policy. Accordingly, equipment incorporating telemetry and command modules has been installed in order to improve operational features, thus increasing the amount of measurement data available in real time at the System Operation Center (SOC). This encourages the development of a state estimator model involving real-time information and pseudo-measurements of loads, which are built from typical power factors and utilization factors (demand factors) of distribution transformers. This work reports the development of a new state estimation method, specific for radial distribution systems. The main algorithm of the method is based on the power summation load flow. The estimation is carried out piecewise, section by section of the feeder, going from the substation to the terminal nodes. For each section, a measurement model is built, resulting in a nonlinear overdetermined set of equations, whose solution is obtained through the Gaussian normal equations. The estimated variables of a section are used as pseudo-measurements for the next section. In general, the measurement set for a generic section consists of pseudo-measurements of power flows and nodal voltages obtained from the previous section (or real-time measurements, where they exist), besides pseudo-measurements of injected powers for the power summations, whose functions are the load flow equations, assuming that the network can be represented by its single-phase equivalent. The great advantage of the algorithm is its simplicity and low computational effort. Moreover, the algorithm is very efficient with regard to the accuracy of the estimated values. Besides the power summation state estimator, this work shows how other algorithms could be adapted to provide state estimation for medium-voltage substations and networks, namely Schweppe's method and an algorithm based on current proportionality, which is usually adopted for network planning tasks. Both estimators were implemented not only as alternatives to the proposed method, but also to obtain results that support its validation. Since in most cases no power measurement is performed at the beginning of the feeder, and this is required by the power summation estimation method, a new algorithm for estimating the network variables at the medium-voltage bus-bar was also developed.
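A minimal sketch of the per-section solve described above, assuming a generic nonlinear measurement model z = h(x) + e with weight matrix W; the names h, jac and the starting point x0 are illustrative, not taken from the dissertation. The update is the classical Gauss-Newton step through the (Gaussian) normal equations:

```python
import numpy as np

def gauss_newton_section(h, jac, z, W, x0, iters=20, tol=1e-8):
    """Solve the overdetermined model z ~ h(x) by iterated normal equations."""
    x = x0.astype(float).copy()
    for _ in range(iters):
        H = jac(x)                       # m x n Jacobian of h, with m > n
        r = z - h(x)                     # measurement residuals
        # Normal equations: (H^T W H) dx = H^T W r
        dx = np.linalg.solve(H.T @ W @ H, H.T @ W @ r)
        x += dx
        if np.linalg.norm(dx) < tol:
            break
    return x   # a section's estimate feeds the next section as pseudo-measurements
```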

Relevance:

10.00%

Publisher:

Abstract:

Modern wireless systems employ adaptive techniques to provide high throughput while observing desired coverage, Quality of Service (QoS) and capacity. An alternative to further enhance the data rate is to apply cognitive radio concepts, where a system is able to exploit unused spectrum on existing licensed bands by sensing the spectrum and opportunistically accessing unused portions. Techniques like Automatic Modulation Classification (AMC) could help or even be vital in such scenarios. Usually, AMC implementations rely on some form of signal pre-processing, which may introduce a high computational cost or make assumptions about the received signal which may not hold (e.g., Gaussianity of the noise). This work proposes a new method to perform AMC using a similarity measure from the Information Theoretic Learning (ITL) framework, known as the correntropy coefficient. It is capable of extracting similarity measurements over a pair of random processes using higher-order statistics, yielding better similarity estimates than, e.g., the correlation coefficient. Experiments carried out by means of computer simulation show that the technique proposed in this paper presents a high success rate in the classification of digital modulations, even in the presence of additive white Gaussian noise (AWGN).
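A sketch of the centered correntropy coefficient named above, with a Gaussian kernel; the kernel bandwidth `sigma` is a free choice here, not a value taken from the paper:

```python
import numpy as np

def _k(d, sigma):
    """Gaussian kernel evaluated on differences d."""
    return np.exp(-d ** 2 / (2 * sigma ** 2))

def correntropy_coefficient(x, y, sigma=1.0):
    x, y = np.asarray(x, float), np.asarray(y, float)
    # Cross-correntropy minus its mean over all sample pairs (i, j):
    u_xy = _k(x - y, sigma).mean() - _k(x[:, None] - y[None, :], sigma).mean()
    # Same centering for the auto terms (the kernel at zero lag is 1):
    u_xx = 1.0 - _k(x[:, None] - x[None, :], sigma).mean()
    u_yy = 1.0 - _k(y[:, None] - y[None, :], sigma).mean()
    return u_xy / np.sqrt(u_xx * u_yy)

# Toy usage: a noisy copy of a signal scores near 1, unrelated noise near 0.
t = np.linspace(0.0, 1.0, 500)
s = np.sin(2 * np.pi * 5 * t)
print(correntropy_coefficient(s, s + 0.1 * np.random.randn(500)))
```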

Relevance:

10.00%

Publisher:

Abstract:

Mira and R Coronae Borealis (R CrB) variable stars are evolved objects surrounded by circumstellar envelopes (CSE) composed of the ejected stellar material. We present a detailed high-spatial-resolution morphological study of the CSE of three stars: IRC+10216, the closest and most studied carbon-rich Mira; o Ceti, the prototype of the Mira class; and RY Sagittarii (RY Sgr), the brightest R CrB variable of the southern hemisphere. JHKL near-infrared adaptive optics images of IRC+10216 with high dynamic range, and V-band images with high angular resolution and great depth, collected with the VLT/NACO and VLT/FORS1 instruments, were analyzed. NACO images of o Ceti were also analyzed. Interferometric observations of RY Sgr collected with the VLTI/MIDI instrument allowed us to explore the innermost regions of its CSE (≈20-40 mas). The CSE of IRC+10216 exhibits, in the near-infrared, clumps with more complex relative displacements than proposed in previous studies. In the V band, the majority of the non-concentric shells, located in the outer CSE layers, seem to be composed of thinner elongated shells. In a global view, the morphological connection between the shells and the bipolar core of the nebula, located in the outer layers, together with the clumps, located in the innermost regions, is difficult to interpret. In the CSE of o Ceti, preliminary results indicate the presence of possible clumps. In the innermost regions (≲110 AU) of the CSE of RY Sgr, two clouds were detected at different epochs, embedded in a variable Gaussian envelope. Based on a rigorous verification, the first cloud was located at ≈100 R☉ (or ≈30 AU) from the centre, toward the east-north-east direction (modulo 180°), and the second one was at an almost perpendicular direction, at approximately twice the distance of the first cloud. This study introduces new constraints on the mass-loss history of these kinds of variables and on the morphology of the innermost regions of their CSE.

Relevance:

10.00%

Publisher:

Abstract:

In this work we study a connection between a non-Gaussian statistics, the Kaniadakis statistics, and complex networks. We show that the degree distribution P(k) of a scale-free network can be calculated using a maximization of the information entropy in the context of non-Gaussian statistics. As an example, a numerical analysis based on the preferential attachment growth model is discussed, and the numerical behavior of the Kaniadakis and Tsallis degree distributions is compared. We also analyze the diffusive epidemic process (DEP) on a one-dimensional regular lattice. The model is composed of A (healthy) and B (sick) species that diffuse independently on the lattice with diffusion rates DA and DB, subject to the probabilistic dynamical rules A + B → 2B and B → A. This model belongs to the category of non-equilibrium systems with an absorbing state and a phase transition between active and inactive states. We investigate the critical behavior of the DEP using an auto-adaptive algorithm to find critical points: the method of automatic searching for critical points (MASCP). We compare our results with the literature and find that the MASCP successfully finds the critical exponents 1/ν and 1/zν in all the cases DA = DB and DA ≠ DB. The simulations show that the DEP has the same critical exponents as expected from field-theoretical arguments. Moreover, we find that, contrary to a renormalization-group prediction, the system does not show a discontinuous phase transition in the regime DA > DB.
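A minimal sketch of the preferential attachment growth model used in the numerical analysis: each new node attaches to m existing nodes chosen with probability proportional to their degree, which produces a heavy-tailed P(k). The parameters (n, m) are illustrative, not the dissertation's values:

```python
import random
from collections import Counter

def preferential_attachment(n=10_000, m=2):
    targets = list(range(m))      # initial core of m nodes
    repeated = []                 # each node id appears once per edge end
    for new in range(m, n):
        for t in targets:
            repeated.extend([new, t])
        # Sampling from `repeated` samples nodes proportionally to degree;
        # the set() de-duplicates targets (may give < m on rare draws).
        targets = list({random.choice(repeated) for _ in range(m)})
    return Counter(repeated)      # node -> degree

degrees = preferential_attachment()
hist = Counter(degrees.values())  # degree k -> number of nodes with degree k
for k in sorted(hist)[:10]:
    print(k, hist[k])             # heavy-tailed P(k), roughly ~ k^-3
```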

Relevance:

10.00%

Publisher:

Abstract:

Ising and m-vector spin-glass models are studied, in the limit of infinite-range interactions, through the replica method. First, the m-vector spin glass, in the presence of an external uniform magnetic field, as well as of uniaxial anisotropy fields, is considered. The effects of the anisotropies on the phase diagrams, and in particular on the Gabay-Toulouse line, which signals the transverse spin-glass ordering, are investigated. The changes in the Gabay-Toulouse line due to the presence of anisotropy fields which favor spin orientations along the Cartesian axes (m = 2: planar anisotropy; m = 3: cubic anisotropy) are also studied. The antiferromagnetic Ising spin glass, in the presence of uniform and Gaussian random magnetic fields, is investigated through a two-sublattice generalization of the Sherrington-Kirkpatrick model. The effects of the magnetic-field randomness on the phase diagrams of the model are analysed. Some comparisons of the present results with experimental observations available in the literature are discussed.

Relevance:

10.00%

Publisher:

Abstract:

In this work we have studied the effects of random biquadratic couplings and random fields in spin-glass models using the replica method. The effect of a random biquadratic coupling was studied in two spin-1 spin-glass models: in one case the interactions occur between pairs of spins, whereas in the second one the interactions occur between p spins and the limit p → ∞ is considered. Both couplings (spin glass and biquadratic) have zero-mean Gaussian probability distributions. In the first model, the replica-symmetric assumption reveals that the system presents two phases, namely paramagnetic and spin-glass, separated by a continuous transition line. The stability analysis of the replica-symmetric solution yields, besides the usual instability associated with the spin-glass ordering, a new phase due to the random biquadratic couplings between the spins. For the case p → ∞, the replica-symmetric assumption yields again only two phases, namely paramagnetic and quadrupolar. In both these phases the spin-glass parameter is zero. Besides, it is shown that they are stable under the Almeida-Thouless stability analysis. One of them presents negative entropy at low temperatures. We developed one step of replica symmetry breaking and noticed that a new phase, the biquadratic glass phase, emerges. In this way we have obtained the correct phase diagram, with three first-order transition lines. These lines merge at a common triple point. The effects of random fields were studied in the Sherrington-Kirkpatrick model considered in the presence of an external random magnetic field following a trimodal distribution, P(hi) = p+ δ(hi − h0) + p0 δ(hi) + p− δ(hi + h0). It is shown that the border of the ferromagnetic phase may present, for conveniently chosen values of p0 and h0, first-order phase transitions, as well as tricritical points at finite temperatures. It is verified that the first-order phase transitions are directly related to the dilution in the fields: the extent of these transitions is reduced for increasing values of p0. In fact, the threshold value of p0, above which all phase transitions are continuous, is calculated analytically. The stability analysis of the replica-symmetric solution is performed and the regions of validity of such a solution are identified.

Relevance:

10.00%

Publisher:

Abstract:

The recent astronomical observations indicate that the universe has null spatial curvature, is accelerating, and that its matter-energy content is composed of circa 30% matter (baryons + dark matter) and 70% dark energy, a relativistic component with negative pressure. However, in order to build more realistic models it is necessary to consider the evolution of small density perturbations to explain the richness of structures observed on the scale of galaxies and clusters of galaxies. The structure formation process was first described, in a pioneering work by Press and Schechter (PS) in 1974, by means of the galaxy cluster mass function. The PS formalism assumes a Gaussian distribution for the primordial density perturbation field. Besides a serious normalization problem, such an approach does not explain the recent cluster X-ray data, and it is also in disagreement with the most up-to-date computational simulations. In this thesis, we discuss several applications of the nonextensive (non-Gaussian) q-statistics, proposed in 1988 by C. Tsallis, with special emphasis on the cosmological process of large-scale structure formation. Initially, we investigate the statistics of the primordial fluctuation field of the density contrast, since the most recent data from the Wilkinson Microwave Anisotropy Probe (WMAP) indicate a deviation from Gaussianity. We assume that such deviations may be described by the nonextensive statistics, because it reduces to the Gaussian distribution in the limit q = 1 of its free parameter, thereby allowing a direct comparison with the standard theory. We study its application to a galaxy cluster catalog based on the ROSAT All-Sky Survey (hereafter HIFLUGCS). We conclude that the standard Gaussian model applied to HIFLUGCS does not agree with the most recent data independently obtained by WMAP. Using the nonextensive statistics, we obtain values much better aligned with the WMAP results. We also demonstrate that the Burr distribution corrects the normalization problem. The cluster mass function formalism was also investigated in the presence of dark energy; in this case, constraints on several cosmic parameters were also obtained. The nonextensive statistics was also applied to two distinct problems: (i) the plasma probe and (ii) the description of Bremsstrahlung radiation (the primary radiation from X-ray clusters), a problem of considerable interest in astrophysics. In another line of development, by using supernova data and the gas mass fraction from galaxy clusters, we discuss a redshift variation of the equation-of-state parameter, considering two distinct expansions. An interesting aspect of this work is that the results do not need a prior on the mass parameter, as usually occurs in analyses involving only supernova data. Finally, we obtain a new estimate of the Hubble parameter through a joint analysis involving the Sunyaev-Zeldovich effect (SZE), the X-ray data from galaxy clusters and the baryon acoustic oscillations. We show that the degeneracy of the observational data with respect to the mass parameter is broken when the signature of the baryon acoustic oscillations, as given by the Sloan Digital Sky Survey (SDSS) catalog, is considered. Our analysis, based on the SZE/X-ray data for a sample of 25 galaxy clusters with triaxial morphology, yields a Hubble parameter in good agreement with independent studies, namely the Hubble Space Telescope project and the recent estimates of WMAP.
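For reference, the standard q-Gaussian form behind the comparison described above (notation illustrative; δ denotes the density contrast and σ its dispersion). The q → 1 limit is what permits the direct comparison with the standard Gaussian theory:

```latex
P_{q}(\delta) \;\propto\; \left[\, 1 - (1-q)\,\frac{\delta^{2}}{2\sigma^{2}} \,\right]^{\frac{1}{1-q}}
\;\xrightarrow[\;q \to 1\;]{}\; \exp\!\left(-\frac{\delta^{2}}{2\sigma^{2}}\right)
```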

Relevance:

10.00%

Publisher:

Abstract:

The segmentation of an image aims to subdivide it into constituent regions or objects that have some relevant semantic content. This subdivision can also be applied to videos; in these cases, however, the objects appear in the various frames that compose the videos. The task of segmenting an image becomes more complex when it is composed of objects that are defined by textural features, where the color information alone is not a good descriptor of the image. Fuzzy segmentation is a region-growing segmentation algorithm that uses affinity functions in order to assign to each element in an image a grade of membership (between 0 and 1) for each object. This work presents a modification of the fuzzy segmentation algorithm with the purpose of improving its time and space complexity. The algorithm was adapted to segment color videos, treating them as 3D volumes. In order to perform segmentation on videos, either a conventional color model or a hybrid model, obtained by a method for choosing the best channels, was used. The fuzzy segmentation algorithm was also applied to texture segmentation by using adaptive affinity functions defined for each object texture. Two types of affinity functions were used, one defined using the normal (or Gaussian) probability distribution and the other using the Skew Divergence. The latter, a variation of the Kullback-Leibler divergence, is a measure of the difference between two probability distributions. Finally, the algorithm was tested on some videos and also on texture mosaic images composed of images from the Brodatz album.
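A sketch of the Skew Divergence mentioned above, SD_α(p, q) = KL(p ‖ αq + (1 − α)p): smoothing q toward p keeps the KL divergence finite even where q has zero-probability bins. The histograms and the value of α are illustrative choices, not the dissertation's:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def skew_divergence(p, q, alpha=0.99):
    """Skew divergence between two probability histograms p and q."""
    q = np.asarray(q, float)
    p = np.asarray(p, float)
    return kl_divergence(p, alpha * q + (1.0 - alpha) * p)

# Example: compare the gray-level histogram of a pixel's neighborhood
# against a reference histogram of an object's texture.
p = np.array([0.1, 0.4, 0.5])   # neighborhood histogram
q = np.array([0.2, 0.3, 0.5])   # object texture histogram
print(skew_divergence(p, q))    # small value -> high affinity to the object
```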

Relevance:

10.00%

Publisher:

Abstract:

Considering a non-relativistic ideal gas, the standard foundations of kinetic theory are investigated in the context of the non-Gaussian statistical mechanics introduced by Kaniadakis. The new formalism is based on the generalization of the Boltzmann H-theorem and the deduction of Maxwell's statistical distribution. The calculated power-law distribution is parameterized by a parameter κ measuring the degree of non-Gaussianity. In the limit κ = 0, the theory of the Gaussian Maxwell-Boltzmann distribution is recovered. Two physical applications of the non-Gaussian effects have been considered. The first one, the κ-Doppler broadening of spectral lines from an excited gas, is obtained from analytical expressions. The second one, a mathematical relationship between the entropic index κ and the stellar polytropic index, is shown by using the thermodynamic formulation for self-gravitating systems.
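For reference, the standard Kaniadakis κ-exponential behind the power-law distribution (notation illustrative), which recovers the ordinary exponential, and hence the Maxwell-Boltzmann distribution, as κ → 0:

```latex
\exp_{\kappa}(x) \;=\; \left(\sqrt{1+\kappa^{2}x^{2}} \;+\; \kappa x\right)^{1/\kappa},
\qquad \lim_{\kappa \to 0}\exp_{\kappa}(x) = e^{x},
% so the kappa-generalized Maxwell distribution
f_{\kappa}(v) \;\propto\; \exp_{\kappa}\!\left(-\frac{m v^{2}}{2 k_{B} T}\right)
% decays as a power law for kappa != 0.
```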

Relevance:

10.00%

Publisher:

Abstract:

Considering a quantum gas, the foundations of standard thermostatistics are investigated in the context of the non-Gaussian statistical mechanics introduced by Tsallis and Kaniadakis. The new formalism is based on the following generalizations: (i) the Maxwell-Boltzmann-Gibbs entropy and (ii) the deduction of the H-theorem. Based on this investigation, we calculate a new entropy using a generalization of combinatorial analysis based on two different methods of counting. The basic ingredients used in the H-theorem were a generalized quantum entropy and a generalization of the collisional term of the Boltzmann equation. The power-law distributions are parameterized by the parameters q and κ, measuring the degree of non-Gaussianity of the quantum gas. In the limit q → 1 and κ → 0, the standard theory is recovered.
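For reference, the two standard deformed logarithms behind the Tsallis and Kaniadakis generalizations of the entropy (notation illustrative); both recover the ordinary logarithm, and hence the standard theory, in the limits named above:

```latex
\ln_{q}(x) = \frac{x^{1-q} - 1}{1 - q},
\qquad
\ln_{\kappa}(x) = \frac{x^{\kappa} - x^{-\kappa}}{2\kappa},
\qquad
\lim_{q \to 1} \ln_{q}(x) \;=\; \lim_{\kappa \to 0} \ln_{\kappa}(x) \;=\; \ln x .
```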

Relevance:

10.00%

Publisher:

Abstract:

One of the mechanisms responsible for anomalous diffusion is the existence of long-range temporal correlations, as in, for example, fractional Brownian motion and walk models with Elephant and Alzheimer memory profiles, where in the latter two cases the walker can always "remember" its first steps. The question to be elucidated, which was the main motivation of our work, is whether memory of the initial history is a necessary condition for observing anomalous diffusion (in this case, superdiffusion). We give a conclusive answer by studying a non-Markovian model in which the walker's memory of the past, at time t, is given by a Gaussian centered at time t/2 whose standard deviation grows linearly as the walker ages. For large widths we find that the model behaves similarly to the Elephant model; in the opposite limit (width → 0), although the walker forgets its early steps, we observed results similar to the Alzheimer walk model, in particular the presence of amnestically induced persistence, characterized by certain log-periodic oscillations. We conclude that memory of early times is not a necessary condition for generating superdiffusion nor for amnestically induced persistence, which can appear even with memory profiles that forget the initial steps, like the Gaussian memory profile investigated here.
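A minimal simulation sketch of the non-Markovian walk described above: at each time t the walker recalls one earlier step, drawn from a Gaussian over past times centered at t/2 with a width growing linearly in t, and repeats it with probability p (or reverses it otherwise). The values of p, sigma and the first step are illustrative, not the dissertation's:

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_memory_walk(T=10_000, p=0.75, sigma=0.2):
    steps = np.empty(T, dtype=int)
    steps[0] = 1                                  # first step of the walker
    for t in range(1, T):
        # Recalled time ~ N(t/2, (sigma*t)^2), clipped to the walker's past.
        recall = int(np.clip(rng.normal(t / 2, sigma * t), 0, t - 1))
        repeat = rng.random() < p                 # repeat or reverse that step
        steps[t] = steps[recall] if repeat else -steps[recall]
    return np.cumsum(steps)                       # walker position x(t)

x = gaussian_memory_walk()
# Superdiffusion shows up as <x(t)^2> growing faster than t (Hurst > 1/2).
```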

Relevance:

10.00%

Publisher:

Abstract:

In general, an inverse problem corresponds to finding a value of an element x in a suitable vector space, given a vector y measuring it, in some sense. When we discretize the problem, it usually boils down to solving an equation system f(x) = y, where f : U ⊂ R^m → R^n represents the forward function on a domain U of the appropriate R^m. As a general rule, we arrive at an ill-posed problem. The resolution of inverse problems has been widely researched over the last decades, because many problems in science and industry consist of determining unknowns by observing their effects through certain indirect measurements. The general subject of this dissertation is the choice of the Tikhonov regularization parameter for an ill-conditioned linear problem, which we discuss in Chapter 1, focusing on the three most popular methods in the current literature of the area. Our more specific focus consists in the simulations reported in Chapter 2, which aim to compare the performance of the three methods in the recovery of images measured with the Radon transform and perturbed by the addition of Gaussian i.i.d. noise. We chose a difference operator as the regularizer of the problem. The contribution we try to make mainly consists in the discussion of the numerical simulations we executed, as exposed in Chapter 2. We understand that the significance of this dissertation lies much more in the questions it raises than in saying anything definitive about the subject, partly for being based on numerical experiments with no new mathematical results associated with them, partly for being about numerical experiments made with a single operator. On the other hand, we gathered from the simulations some observations which seemed interesting to us, considering the literature of the area. In particular, we highlight observations, summarized in the conclusion of this work, about the different vocations of methods like GCV and the L-curve, and also about the tendency of the optimal parameters observed in the L-curve method to cluster in a small interval, strongly correlated with the behavior of the generalized singular value decomposition curve of the operators involved, under reasonably broad regularity conditions on the images to be recovered.
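A minimal sketch of the setup described above: Tikhonov regularization x_λ = argmin ‖Ax − y‖² + λ²‖Lx‖² with L a first-difference operator, and an L-curve scan over λ. Here A, y and the λ grid are illustrative; in the dissertation, A is the (discretized) Radon transform:

```python
import numpy as np

def first_difference(n):
    """First-difference regularization operator L of shape (n-1, n)."""
    L = np.zeros((n - 1, n))
    idx = np.arange(n - 1)
    L[idx, idx], L[idx, idx + 1] = -1.0, 1.0
    return L

def tikhonov(A, y, L, lam):
    """Solve the stacked least-squares form of the Tikhonov problem."""
    A_aug = np.vstack([A, lam * L])
    y_aug = np.concatenate([y, np.zeros(L.shape[0])])
    x, *_ = np.linalg.lstsq(A_aug, y_aug, rcond=None)
    return x

def l_curve(A, y, L, lams):
    """Return the (residual norm, seminorm) points of the L-curve."""
    pts = []
    for lam in lams:
        x = tikhonov(A, y, L, lam)
        pts.append((np.linalg.norm(A @ x - y), np.linalg.norm(L @ x)))
    return pts  # pick the corner (maximum curvature on a log-log plot)
```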