920 results for Self-Validating Numerical Methods
Abstract:
In this paper we develop and apply methods for the spectral analysis of non-selfadjoint tridiagonal infinite and finite random matrices, and for the spectral analysis of analogous deterministic matrices which are pseudo-ergodic in the sense of E. B. Davies (Commun. Math. Phys. 216 (2001), 687–704). As a major application to illustrate our methods we focus on the “hopping sign model” introduced by J. Feinberg and A. Zee (Phys. Rev. E 59 (1999), 6433–6443), in which the main objects of study are random tridiagonal matrices which have zeros on the main diagonal and random ±1’s as the other entries. We explore the relationship between spectral sets in the finite and infinite matrix cases, and between the semi-infinite and bi-infinite matrix cases, for example showing that the numerical range and p-norm ε-pseudospectra (ε > 0, p ∈ [1,∞]) of the random finite matrices converge almost surely to their infinite matrix counterparts, and that the finite matrix spectra are contained in the infinite matrix spectrum Σ. We also propose a sequence of inclusion sets for Σ which we show is convergent to Σ, with the nth element of the sequence computable by calculating smallest singular values of (large numbers of) n×n matrices. We propose similar convergent approximations for the 2-norm ε-pseudospectra of the infinite random matrices, these approximations sandwiching the infinite matrix pseudospectra from above and below.
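The computational ingredient described above, detecting approximate spectral points via smallest singular values, can be sketched in Python as follows; the matrix size and sample points are illustrative choices, not the paper's exact inclusion-set construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def hopping_sign_matrix(n, rng):
    """n x n tridiagonal 'hopping sign' matrix: zero main diagonal,
    independent random +/-1 entries on the two off-diagonals."""
    upper = rng.choice([-1.0, 1.0], size=n - 1)
    lower = rng.choice([-1.0, 1.0], size=n - 1)
    return np.diag(upper, 1) + np.diag(lower, -1)

def sigma_min(A, z):
    """Smallest singular value of A - z*I: the quantity whose smallness
    flags z as lying in the epsilon-pseudospectrum (sigma_min < epsilon)."""
    n = A.shape[0]
    return np.linalg.svd(A - z * np.eye(n), compute_uv=False)[-1]

# Sampling sigma_min over a grid of z values, for many random n x n samples,
# is how the nth inclusion set would be approximated numerically.
A = hopping_sign_matrix(40, rng)
value_at_origin = sigma_min(A, 0.0)
```

In practice the grid of z values would cover the region of interest in the complex plane, and the computation repeats over many random samples.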
Abstract:
The high computational cost of calculating radiative heating rates in numerical weather prediction (NWP) and climate models requires that calculations be made infrequently, leading to poor sampling of the fast-changing cloud field and a poor representation of the feedback that would occur. This paper presents two related schemes for improving the temporal sampling of the cloud field. Firstly, the ‘split time-stepping’ scheme takes advantage of the independent nature of the monochromatic calculations of the ‘correlated-k’ method to split the calculation into gaseous absorption terms that are highly dependent on changes in cloud (the optically thin terms) and those that are not (optically thick). The small number of optically thin terms can then be calculated more often to capture changes in the grey absorption and scattering associated with cloud droplets and ice crystals. Secondly, the ‘incremental time-stepping’ scheme performs a simple radiative transfer calculation using only one or two monochromatic calculations representing the optically thin part of the atmospheric spectrum. These are found to be sufficient to represent the heating-rate increments caused by changes in the cloud field, which can then be added to the last full calculation of the radiation code. We test these schemes in an operational forecast model configuration and find that a significant improvement is achieved, for a small computational cost, over the current scheme employed at the Met Office. The ‘incremental time-stepping’ scheme is recommended for operational use, along with a new scheme to correct the surface fluxes for the change in solar zenith angle between radiation calculations.
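A minimal sketch of the ‘incremental time-stepping’ idea, with hypothetical stand-ins for the correlated-k terms (the functions and coefficients below are invented for illustration; in this toy the cloud sensitivity lives entirely in the thin terms, so the increment recovers the full calculation exactly):

```python
import numpy as np

def gaseous_terms(n_terms=200):
    """Hypothetical expensive, cloud-insensitive part of the correlated-k sum
    (the optically thick terms, which saturate and barely respond to cloud)."""
    ks = np.linspace(1.0, 10.0, n_terms)
    return float(np.mean(np.exp(-ks)))

def thin_terms(cloud):
    """Hypothetical cheap optically thin terms, strongly cloud-sensitive."""
    return float(np.mean(np.exp(-np.array([0.1, 0.3]) * cloud)))

def full_heating(cloud):
    """Full radiation call: all terms; done infrequently."""
    return gaseous_terms() + thin_terms(cloud)

# Incremental scheme: one full call at t0, then on later time steps add only
# the increment of the cheap thin terms since the last full call.
cloud_t0, cloud_t1 = 0.2, 0.8          # cloud thickens between radiation calls
H_full = full_heating(cloud_t0)         # expensive, infrequent
H_incremental = H_full + thin_terms(cloud_t1) - thin_terms(cloud_t0)
```

In the real scheme the thin terms are one or two monochromatic calculations of the radiation code, not analytic stand-ins, and the increment is only approximate.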
Abstract:
By means of self-consistent three-dimensional magnetohydrodynamic (MHD) numerical simulations, we analyze magnetized solar-like stellar winds and their dependence on the plasma-beta parameter (the ratio between thermal and magnetic energy densities). This is the first study to perform such an analysis by solving the fully ideal three-dimensional MHD equations. We adopt in our simulations a heating parameter, gamma, which is responsible for the thermal acceleration of the wind. We analyze winds with polar magnetic field intensities ranging from 1 to 20 G. We show that the wind structure presents characteristics similar to those of the solar coronal wind. The steady-state magnetic field topology for all cases is similar, presenting a helmet streamer-type configuration, with zones of closed field lines and open field lines coexisting. Higher magnetic field intensities lead to faster and hotter winds. For the maximum simulated magnetic intensity of 20 G and solar coronal base density, the wind velocity reaches values of ~1000 km s⁻¹ at r ~ 20 r₀ and a maximum temperature of ~6 × 10⁶ K at r ~ 6 r₀. The increase of the field intensity generates a larger "dead zone" in the wind, i.e., the closed loops that prevent matter from escaping at latitudes below ~45° extend farther away from the star. The Lorentz force naturally leads to a latitude-dependent wind. We show that by increasing the density while maintaining B₀ = 20 G the system recovers slower and cooler winds. For a fixed gamma, we show that the key parameter in determining the wind velocity profile is the beta parameter at the coronal base. Therefore, there is a group of magnetized flows that present the same terminal velocity despite different thermal and magnetic energy densities, as long as the plasma-beta parameter is the same.
This degeneracy, however, can be removed by comparing other physical parameters of the wind, such as the mass-loss rate. We analyze the influence of gamma on our results and show that it is also important in determining the wind structure.
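The plasma-beta parameter that controls the wind solutions above is straightforward to compute; the coronal base conditions below are illustrative solar-like values, not the paper's exact inputs.

```python
# Plasma beta: ratio of thermal to magnetic pressure (energy density).
MU0 = 4e-7 * 3.141592653589793   # vacuum permeability, T m / A
K_B = 1.380649e-23               # Boltzmann constant, J / K

def plasma_beta(n, T, B):
    """beta = n k_B T / (B^2 / 2 mu0); n in m^-3, T in K, B in tesla."""
    thermal_pressure = n * K_B * T
    magnetic_pressure = B ** 2 / (2.0 * MU0)
    return thermal_pressure / magnetic_pressure

# 20 G = 2e-3 T polar field, coronal base density ~1e14 m^-3, T ~ 1e6 K:
# a low-beta (magnetically dominated) base, the fast/hot wind regime.
beta = plasma_beta(1e14, 1e6, 2e-3)
```

Raising the base density at fixed B₀ raises beta linearly, which is the knob the paper turns to recover slower and cooler winds.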
Abstract:
Background: Low maternal awareness of fetal movements is associated with negative birth outcomes. Knowledge regarding pregnant women's compliance with programs of systematic self-assessment of fetal movements is needed. The aim of this study was to investigate women's experiences using two different self-assessment methods for monitoring fetal movements and to determine whether the women had a preference for one or the other method. Methods: Data were collected in a crossover trial; 40 healthy women with an uncomplicated full-term pregnancy counted the fetal movements according to a Count-to-ten method and assessed the character of the movements according to the Mindfetalness method. Each self-assessment was observed by a midwife and followed by a questionnaire. A total of 80 self-assessments were performed; 40 with each method. Results: Of the 40 women, only one did not find at least one method suitable. Twenty of the 39 reported a preference: 15 for the Mindfetalness method and five for the Count-to-ten method. All 39 said they felt calm, relaxed, mentally present and focused during the observations. Furthermore, the women described the observation of the movements as safe and reassuring and as a moment of communication with their unborn baby. Conclusions: In the 80 assessments, all but one of the women found one or both methods suitable for self-assessment of fetal movements, and they felt comfortable during the assessments. More women preferred the Mindfetalness method to the Count-to-ten method than vice versa.
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
In this paper, self-synchronization of four non-ideal exciters is examined via numerical simulation. The mathematical model consists of four unbalanced direct-current motors with limited power supply mounted on a flexible structural frame support. (c) 2004 Elsevier B.V. All rights reserved.
Abstract:
In this work, the dynamic behavior of self-synchronization and synchronization through mechanical interactions between a nonlinear self-excited oscillating system and two non-ideal sources is examined by numerical simulations. The physical model of the vibrating system consists of a nonlinear spring of Duffing type and nonlinear damping described by a Rayleigh term. The system is additionally forced by two unbalanced identical direct-current motors with limited power (non-ideal excitations). The present work mathematically implements parametric excitation, described by two periodically changing stiffnesses of Mathieu type that are switched on and off. Copyright © 2005 by ASME.
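A minimal numerical sketch of a self-excited oscillator of this kind, combining a Duffing spring, Rayleigh damping, and a Mathieu-type periodic stiffness (all coefficients are hypothetical, and the single-oscillator model is far simpler than the non-ideal two-motor system studied):

```python
import math

K0, DELTA, OMEGA = 1.0, 0.3, 2.0   # hypothetical Mathieu stiffness parameters
A, B, BETA = 0.1, 0.05, 0.5        # hypothetical Rayleigh and Duffing terms

def stiffness(t):
    """Mathieu-type periodically varying stiffness k(t)."""
    return K0 * (1.0 + DELTA * math.cos(OMEGA * t))

def rhs(t, x, v):
    """x'' = (A - B v^2) v - k(t) x - BETA x^3:
    Rayleigh self-excitation + Mathieu stiffness + Duffing cubic term."""
    return v, (A - B * v * v) * v - stiffness(t) * x - BETA * x ** 3

def rk4(x, v, t=0.0, h=0.01, steps=5000):
    """Classical 4th-order Runge-Kutta time integration."""
    for _ in range(steps):
        k1x, k1v = rhs(t, x, v)
        k2x, k2v = rhs(t + h / 2, x + h / 2 * k1x, v + h / 2 * k1v)
        k3x, k3v = rhs(t + h / 2, x + h / 2 * k2x, v + h / 2 * k2v)
        k4x, k4v = rhs(t + h, x + h * k3x, v + h * k3v)
        x += h / 6 * (k1x + 2 * k2x + 2 * k3x + k4x)
        v += h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        t += h
    return x, v

x_end, v_end = rk4(0.1, 0.0)   # small initial displacement grows to a limit cycle
```

The Rayleigh term pumps energy at small velocities and dissipates at large ones, so the trajectory settles onto a bounded self-excited oscillation rather than decaying.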
Abstract:
This paper presents a comparative analysis between experimental characterization and numerical simulation results for a three-dimensional FCC photonic crystal (PhC) based on the self-assembly synthesis of monodisperse latex spheres. Specifically, experimental optical characterization, by means of reflectance measurements under variable angles over the [1,1,1] lattice plane family, is compared to theoretical calculations based on the Finite Difference Time Domain (FDTD) method, in order to investigate the correlation between theoretical predictions and experimental data. The goal is to highlight the influence of crystal defects on the achieved performance.
Abstract:
This paper deals with topology optimization of plane linear-elastic problems considering the influence of self-weight on the efforts in structural elements. For this purpose a numerical technique called SESO (Smooth ESO) is used, which is based on a procedure of progressive reduction of the stiffness contribution of inefficient elements at lower stresses until they no longer have influence. SESO is applied with the finite element method, using a high-order triangular finite element. This paper extends the SESO technique to account for self-weight, where the program, in computing the volume and specific weight, automatically generates an equivalent concentrated force at each node of the element. The evaluation is finalized with the definition of a strut-and-tie model resulting in regions of stress concentration. Examples are presented in which optimal topologies and configurations are obtained. (C) 2012 CIMNE (Universitat Politecnica de Catalunya). Published by Elsevier Espana, S.L.U. All rights reserved.
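The self-weight lumping step described above can be sketched as follows for a linear, constant-thickness triangle, where each node receives one third of the element weight (the paper uses a high-order element; the material values here are illustrative):

```python
def triangle_area(p1, p2, p3):
    """Area of a plane triangle from the cross-product (shoelace) formula."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)) / 2.0

def self_weight_nodal_forces(p1, p2, p3, rho, t, g=9.81):
    """Equivalent vertical nodal forces (N, negative = downward) from the
    self-weight of a linear triangular element of thickness t and density rho.
    For a linear triangle the weight lumps equally onto the three nodes."""
    W = rho * g * triangle_area(p1, p2, p3) * t   # element weight
    return [-W / 3.0] * 3

# Illustrative concrete element: unit right triangle, t = 0.1 m, rho = 2500 kg/m^3
forces = self_weight_nodal_forces((0, 0), (1, 0), (0, 1), rho=2500.0, t=0.1)
```

For a higher-order element the shares are not equal across nodes, but the principle of converting volume and specific weight into concentrated nodal forces is the same.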
Abstract:
A self-learning simulated annealing algorithm is developed by combining the characteristics of simulated annealing and domain elimination methods. The algorithm is validated on a standard mathematical test function and by optimizing the end region of a practical power transformer. The numerical results show that the CPU time required by the proposed method is about one third of that of the conventional simulated annealing algorithm.
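A minimal sketch of simulated annealing on a standard test function, with a crude shrinking of the move range standing in for the domain elimination component; this is only the generic technique the paper builds on, not the authors' algorithm.

```python
import math
import random

def rastrigin(x):
    """Standard multimodal test function; global minimum 0 at the origin."""
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)

def anneal(f, dim=2, lo=-5.12, hi=5.12, T0=10.0, cooling=0.99,
           iters=20000, seed=0):
    rng = random.Random(seed)
    x = [rng.uniform(lo, hi) for _ in range(dim)]
    fx, T, step = f(x), T0, (hi - lo) / 2
    best, fbest = list(x), fx
    for _ in range(iters):
        # Gaussian move, clamped to the search domain
        y = [min(hi, max(lo, xi + rng.gauss(0, step))) for xi in x]
        fy = f(y)
        # Metropolis acceptance: always downhill, uphill with prob exp(-dF/T)
        if fy < fx or rng.random() < math.exp(-(fy - fx) / T):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = list(x), fx
        T *= cooling        # geometric cooling schedule
        step *= 0.9997      # shrink move range, loosely mimicking domain elimination
    return best, fbest

best, fbest = anneal(rastrigin)
```

True domain elimination discards regions of the search space outright rather than merely shrinking the step size, which is where the reported speed-up comes from.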
Abstract:
In this work, different methods to estimate the value of thin film residual stresses using instrumented indentation data were analyzed. This study considered procedures proposed in the literature, as well as a modification of one of these methods and a new approach based on the effect of residual stress on the value of hardness calculated via the Oliver and Pharr method. The analysis of these methods was centered on an axisymmetric two-dimensional finite element model, which was developed to simulate instrumented indentation testing of thin ceramic films deposited onto hard steel substrates. Simulations were conducted varying the level of film residual stress, film strain hardening exponent, film yield strength, and film Poisson's ratio. Different ratios of maximum penetration depth h_max over film thickness t were also considered, including h_max/t = 0.04, for which the contribution of the substrate to the mechanical response of the system is not significant. Residual stresses were then calculated following the procedures mentioned above and compared with the values used as input in the numerical simulations. In general, results indicate that the deviation of each method from the input values depends on the conditions studied. The method by Suresh and Giannakopoulos consistently overestimated the values when stresses were compressive. The method provided by Wang et al. has shown less dependence on h_max/t than the others.
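The Oliver and Pharr hardness referred to above is computed from the peak load, peak depth, and unloading stiffness; a sketch using the ideal Berkovich area function, with illustrative input values rather than data from the paper:

```python
def oliver_pharr_hardness(P_max, h_max, S, eps=0.75):
    """H = P_max / A_c, with contact depth h_c = h_max - eps * P_max / S
    (eps = 0.75 for a Berkovich indenter) and the ideal Berkovich area
    function A_c = 24.5 * h_c**2. All quantities in consistent SI units."""
    h_c = h_max - eps * P_max / S
    A_c = 24.5 * h_c ** 2
    return P_max / A_c

# Illustrative: P_max = 10 mN, h_max = 300 nm, S = 100 mN/um, all in N, m, N/m
H = oliver_pharr_hardness(P_max=10e-3, h_max=300e-9, S=100e-3 / 1e-6)
```

The new approach mentioned in the abstract exploits the fact that residual stress shifts the contact area, and hence the hardness value this formula returns, relative to the stress-free case.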
Abstract:
In the past few decades, detailed observations of radio and X-ray emission from massive binary systems have revealed a whole new physics present in such systems. Both thermal and non-thermal components of this emission indicate that most of the radiation at these bands originates in shocks. O- and B-type stars and Wolf-Rayet (WR) stars present supersonic and massive winds that, when colliding, emit largely via free-free radiation. The non-thermal radio and X-ray emissions are due to synchrotron and inverse Compton processes, respectively. In this case, magnetic fields are expected to play an important role in the emission distribution. In the past few years the modelling of the free-free and synchrotron emissions from massive binary systems has been based on purely hydrodynamical simulations, and on ad hoc assumptions regarding the distribution of magnetic energy and the field geometry. In this work we provide the first full magnetohydrodynamic numerical simulations of wind-wind collision in massive binary systems. We study the free-free emission, characterizing its dependence on the stellar and orbital parameters. We also study self-consistently the evolution of the magnetic field at the shock region, also obtaining the synchrotron energy distribution integrated along different lines of sight. We show that the magnetic field in the shocks is larger than that obtained when proportionality between B and the plasma density is assumed. Also, we show that the role of the synchrotron emission relative to the total radio emission has been underestimated.
Abstract:
Doctoral program: Sistemas Inteligentes y Aplicaciones Numéricas en Ingeniería, Instituto Universitario (SIANI)
Abstract:
The Peer-to-Peer network paradigm is drawing the attention of both end users and researchers for its features. P2P networks shift from the classic client-server approach to a high level of decentralization where there is no central control and all nodes should be able not only to request services, but to provide them to other peers as well. While on one hand such a high level of decentralization might lead to interesting properties like scalability and fault tolerance, on the other hand it implies many new problems to deal with. A key feature of many P2P systems is openness, meaning that everybody is potentially able to join a network with no need for subscription or payment. The combination of openness and lack of central control makes it feasible for a user to free-ride, that is, to increase its own benefit by using services without allocating resources to satisfy other peers' requests. One of the main goals when designing a P2P system is therefore to achieve cooperation between users. Given the nature of P2P systems, based on simple local interactions of many peers having partial knowledge of the whole system, an interesting way to achieve desired properties at system scale is to obtain them as emergent properties of the many interactions occurring at the local node level. Two methods are typically used to address the problem of cooperation in P2P networks: 1) engineering emergent properties when designing the protocol; 2) studying the system as a game and applying Game Theory techniques, especially to find Nash Equilibria in the game and to reach them, making the system stable against possible deviant behaviors. In this work we present an evolutionary framework to enforce cooperative behaviour in P2P networks that is an alternative to both methods mentioned above.
Our approach is based on an evolutionary algorithm inspired by computational sociology and evolutionary game theory, in which each peer periodically tries to copy another peer that is performing better. The proposed algorithms, called SLAC and SLACER, draw inspiration from tag systems originated in computational sociology; the main idea behind them is to have low-performance nodes copy high-performance ones. The algorithm is run locally by every node and leads to an evolution of the network both from the topology and from the nodes' strategy point of view. Initial tests with a simple Prisoner's Dilemma application show how SLAC is able to bring the network to a state of high cooperation independently of the initial network conditions. Interesting results are obtained when studying the effect of cheating nodes on the SLAC algorithm: in fact, in some cases selfish nodes rationally exploiting the system for their own benefit can actually improve system performance from the cooperation-formation point of view. The final step is to apply our results to more realistic scenarios. We put our efforts into studying and improving the BitTorrent protocol. BitTorrent was chosen not only for its popularity but because it has many points in common with the SLAC and SLACER algorithms, ranging from the game-theoretical inspiration (a tit-for-tat-like mechanism) to the swarm topology. We discovered fairness, defined as the ratio between uploaded and downloaded data, to be a weakness of the original BitTorrent protocol, and we drew on the knowledge of cooperation formation and maintenance mechanisms derived from the development and analysis of SLAC and SLACER to improve fairness and tackle free-riding and cheating in BitTorrent. We produced an extension of BitTorrent called BitFair that has been evaluated through simulation and has shown its ability to enforce fairness and tackle free-riding and cheating nodes.
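The copy-the-better-peer dynamic at the core of SLAC can be sketched as follows; this toy omits the tags and network rewiring that the full algorithm uses to sustain cooperation, so it shows only the imitation mechanism, and all parameters are illustrative.

```python
import random

# One-shot Prisoner's Dilemma payoff matrix: (row player, column player)
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def play_round(strategies, rng):
    """Each node plays the PD against one random partner; returns utilities."""
    util = [0] * len(strategies)
    for i in range(len(strategies)):
        j = rng.randrange(len(strategies))
        if j == i:
            continue
        a, b = PAYOFF[(strategies[i], strategies[j])]
        util[i] += a
        util[j] += b
    return util

def imitate_step(strategies, util, rng, mutate=0.01):
    """Each node compares itself to a random peer and copies the peer's
    strategy if the peer performed better; small mutation keeps variety."""
    new = list(strategies)
    for i in range(len(strategies)):
        j = rng.randrange(len(strategies))
        if util[j] > util[i]:
            new[i] = strategies[j]
        if rng.random() < mutate:
            new[i] = rng.choice('CD')
    return new

rng = random.Random(42)
strategies = [rng.choice('CD') for _ in range(100)]
for _ in range(50):
    util = play_round(strategies, rng)
    strategies = imitate_step(strategies, util, rng)
```

In a well-mixed population like this toy, defection tends to spread; it is precisely the link rewiring and tag clustering of SLAC and SLACER that turn the same imitation mechanism into an engine of cooperation.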