980 results for CONVERGENCE RATE
Abstract:
The solution of structural reliability problems by the First Order method requires optimization algorithms to find the smallest distance between a limit state function and the origin of standard Gaussian space. The Hasofer-Lind-Rackwitz-Fiessler (HLRF) algorithm, developed specifically for this purpose, has been shown to be efficient but not robust, as it fails to converge for a significant number of problems. On the other hand, recent developments in general (augmented Lagrangian) optimization techniques have not been tested in application to structural reliability problems. In the present article, three new optimization algorithms for structural reliability analysis are presented. One algorithm is based on the HLRF, but uses a new differentiable merit function with Wolfe conditions to select the step length in the line search. It is shown in the article that, under certain assumptions, the proposed algorithm generates a sequence that converges to the local minimizer of the problem. Two new augmented Lagrangian methods are also presented, which use quadratic penalties to solve nonlinear problems with equality constraints. The performance and robustness of the new algorithms are compared to those of the classical augmented Lagrangian method, the HLRF, and the improved HLRF (iHLRF) algorithms, in the solution of 25 benchmark problems from the literature. The proposed HLRF-based algorithm is shown to be more robust than HLRF or iHLRF, and as efficient as the iHLRF algorithm. The two augmented Lagrangian methods proposed herein are shown to be more robust and more efficient than the classical augmented Lagrangian method.
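For reference, a minimal sketch of the classical HLRF iteration follows (the article's variants add a differentiable merit function and a Wolfe line search on top of this update). The limit state function, its gradient, and the tolerances are illustrative assumptions.

```python
import numpy as np

def hlrf(g, grad_g, u0, tol=1e-8, max_iter=100):
    """Classical HLRF iteration: find the design point, i.e. the point
    on the limit state surface g(u) = 0 closest to the origin of
    standard Gaussian space. Illustrative sketch only; the article's
    variants add a merit function and Wolfe line search on top."""
    u = np.asarray(u0, dtype=float)
    for _ in range(max_iter):
        gu, grad = g(u), grad_g(u)
        # HLRF update: project onto the linearized limit state surface.
        u_new = (grad @ u - gu) / (grad @ grad) * grad
        if np.linalg.norm(u_new - u) < tol:
            return u_new, np.linalg.norm(u_new)  # design point, beta
        u = u_new
    return u, np.linalg.norm(u)

# Hypothetical linear limit state g(u) = 3 - u1 - u2 (beta = 3/sqrt(2)).
beta = hlrf(lambda u: 3 - u[0] - u[1],
            lambda u: np.array([-1.0, -1.0]),
            np.zeros(2))[1]
```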
Abstract:
Modern GPUs are well suited for intensive computational tasks and massive parallel computation. Sparse matrix-vector multiplication and the linear triangular solver are among the most important and heavily used kernels in scientific computation, and several challenges in developing a high-performance kernel combining the two modules are investigated. The main interest is to solve linear systems derived from elliptic equations discretized with triangular elements. The resulting linear system has a symmetric positive definite matrix. The sparse matrix is stored in the compressed sparse row (CSR) format. A CUDA algorithm is proposed to execute the matrix-vector multiplication directly on the CSR format. A dependence-tree algorithm is used to determine which variables the linear triangular solver can compute in parallel. To increase the number of parallel threads, a graph coloring algorithm is implemented to reorder the mesh numbering in a pre-processing phase. The proposed method is compared with available parallel and serial libraries. The results show that the proposed method reduces the computational cost of the matrix-vector multiplication. The pre-processing associated with the triangular solver needs to be executed just once in the proposed method. The conjugate gradient method was implemented and showed a similar convergence rate for all the compared methods. The proposed method showed significantly smaller execution time.
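As a minimal illustration of the row-wise traversal that the CSR format implies, here is a serial Python/NumPy sketch of the matrix-vector product; in the CUDA kernel each row would be mapped to a thread or warp. The 3x3 symmetric positive definite example is hypothetical.

```python
import numpy as np

def csr_matvec(data, indices, indptr, x):
    """y = A @ x with A stored in compressed sparse row (CSR) form.
    Written serially for clarity; a GPU kernel would assign each
    row's dot product to a thread (or warp)."""
    n = len(indptr) - 1
    y = np.zeros(n)
    for row in range(n):
        start, end = indptr[row], indptr[row + 1]
        y[row] = data[start:end] @ x[indices[start:end]]
    return y

# Hypothetical 3x3 SPD matrix [[4,1,0],[1,4,1],[0,1,4]] in CSR form:
data    = np.array([4.0, 1.0, 1.0, 4.0, 1.0, 1.0, 4.0])
indices = np.array([0, 1, 0, 1, 2, 1, 2])
indptr  = np.array([0, 2, 5, 7])
y = csr_matvec(data, indices, indptr, np.ones(3))  # -> [5., 6., 5.]
```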
Abstract:
A path integral simulation algorithm which includes a higher-order Trotter approximation (HOA) is analyzed and compared to an approach which includes the correct quantum mechanical pair interaction (effective propagator (EPr)). It is found that the HOA algorithm converges to the quantum limit with increasing Trotter number P as P^{-4}, while the EPr algorithm converges as P^{-2}. The convergence rate of the HOA algorithm is analyzed for various physical systems such as a harmonic chain, a particle in a double-well potential, gaseous argon, gaseous helium and crystalline argon. A new expression for the estimator of the pair correlation function in the HOA algorithm is derived. A new path integral algorithm, the hybrid algorithm, is developed. It combines an exact treatment of the quadratic part of the Hamiltonian and the higher-order Trotter expansion techniques. For the discrete quantum sine-Gordon chain (DQSGC), it is shown that this algorithm works more efficiently than all other improved path integral algorithms discussed in this work. The new simulation techniques developed in this work allow the analysis of the DQSGC and disordered model systems in the highly quantum mechanical regime using path integral molecular dynamics (PIMD) and adiabatic centroid path integral molecular dynamics (ACPIMD). The ground state phonon dispersion relation is calculated for the DQSGC by the ACPIMD method. It is found that the excitation gap at zero wave vector is reduced by quantum fluctuations. Two different phases exist: one phase with a finite excitation gap at zero wave vector, and a gapless phase where the excitation gap vanishes. The reaction of the DQSGC to an external driving force is analyzed at T=0. In the gapless phase the system creeps if a small force is applied, and in the phase with a gap the system is pinned. At a critical force, the system undergoes a depinning transition in both phases and flow is induced. The analysis of the DQSGC is extended to models with disordered substrate potentials. Three different cases are analyzed: disordered substrate potentials with roughness exponent H=0, H=1/2, and a model with disordered bond length. For all models, the ground state phonon dispersion relation is calculated.
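The quoted convergence orders (P^{-4} for HOA, P^{-2} for EPr) are the kind of statement one verifies by fitting the slope of log(error) against log(P); a minimal sketch follows, with made-up error values in place of simulation data.

```python
import numpy as np

# Hypothetical absolute errors of some observable at increasing
# Trotter numbers P (placeholder values, not data from this work).
P   = np.array([4, 8, 16, 32, 64])
err = np.array([2.0e-2, 1.25e-3, 7.8e-5, 4.9e-6, 3.1e-7])

# Fit err ~ C * P**(-k): the slope of log(err) vs log(P) gives -k.
k = -np.polyfit(np.log(P), np.log(err), 1)[0]
print(f"estimated convergence order: P^-{k:.1f}")  # ~ P^-4 here
```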
Abstract:
This Ph.D. thesis focuses on iterative regularization methods for linear and nonlinear ill-posed problems. Regarding linear problems, three new stopping rules for the conjugate gradient method applied to the normal equations are proposed and tested in many numerical simulations, including some tomographic image reconstruction problems. Regarding nonlinear problems, convergence and convergence rate results are provided for a Newton-type method with a modified version of the Landweber iteration as an inner iteration, in a Banach space setting.
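The abstract does not specify the three new stopping rules, so the sketch below shows the baseline they would be compared against: CG on the normal equations (CGLS) stopped by the classical discrepancy principle. The noise bound delta and safety factor tau are assumptions.

```python
import numpy as np

def cgls(A, b, delta, tau=1.1, max_iter=200):
    """CG applied to the normal equations A^T A x = A^T b (CGLS),
    stopped by the discrepancy principle: iterate while
    ||A x - b|| > tau * delta, where delta bounds the data noise.
    (Baseline illustration only; the thesis proposes new rules.)"""
    x = np.zeros(A.shape[1])
    r = b - A @ x          # data-space residual
    s = A.T @ r            # normal-equations residual
    p = s.copy()
    gamma = s @ s
    for _ in range(max_iter):
        if np.linalg.norm(r) <= tau * delta:
            break          # discrepancy principle: stop early
        q = A @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x
```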
Abstract:
Marginal generalized linear models can be used for clustered and longitudinal data by fitting a model as if the data were independent and using an empirical estimator of parameter standard errors. We extend this approach to data where the number of observations correlated with a given one grows with sample size, and show that parameter estimates are consistent and asymptotically normal with a slower convergence rate than for independent data, and that an information sandwich variance estimator is consistent. We present two problems that motivated this work: the modelling of patterns of HIV genetic variation and the behavior of clustered-data estimators when clusters are large.
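For the simplest marginal model (identity link, independence working model), the information sandwich estimator can be sketched as follows: the "bread" is the inverse information and the "meat" aggregates per-cluster score contributions. The data layout and cluster labels are hypothetical.

```python
import numpy as np

def sandwich_variance(X, y, cluster):
    """Cluster-robust (information sandwich) covariance for regression
    coefficients fitted as if observations were independent.
    Identity-link illustration; the same bread/meat structure extends
    to other marginal generalized linear models."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    bread = np.linalg.inv(X.T @ X)          # inverse information
    meat = np.zeros((X.shape[1], X.shape[1]))
    for c in np.unique(cluster):
        Xc, rc = X[cluster == c], resid[cluster == c]
        u = Xc.T @ rc                        # cluster score contribution
        meat += np.outer(u, u)
    return beta, bread @ meat @ bread        # estimates, covariance
```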
Abstract:
With the insatiable curiosity of human beings to explore the universe and our solar system, it is essential to benefit from larger propulsion capabilities to execute efficient transfers and carry more scientific equipment. In the field of space trajectory optimization, fundamental advances in using low-thrust propulsion and exploiting multi-body dynamics have played a pivotal role in designing efficient space mission trajectories. The former provides a larger cumulative momentum change in comparison with conventional chemical propulsion, whereas the latter results in almost ballistic trajectories with a negligible amount of propellant. However, the problem of space trajectory design translates into an optimal control problem which is, in general, time-consuming and very difficult to solve. Therefore, the goal of the thesis is to address the above problem by developing a methodology to simplify and facilitate the process of finding initial low-thrust trajectories in both two-body and multi-body environments. This initial solution not only provides mission designers with a better understanding of the problem and solution, but also serves as a good initial guess for high-fidelity optimal control solvers and increases their convergence rate. Almost all high-fidelity solvers benefit from an initial guess that already satisfies the equations of motion and some of the most important constraints. Despite the nonlinear nature of the problem, a robust technique is sought for a wide range of typical low-thrust transfers with reduced computational intensity. Another important aspect of the developed methodology is the representation of low-thrust trajectories by Fourier series, with which the number of design variables is reduced significantly. Emphasis is placed on simplifying the equations of motion to the extent possible and on avoiding approximation of the controls. These choices contribute to speeding up the solution-finding procedure. Several example applications of two- and three-dimensional two-body low-thrust transfers are considered. In addition, in multi-body dynamics, and in particular the restricted three-body problem, several Earth-to-Moon low-thrust transfers are investigated.
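To illustrate the Fourier-series representation mentioned above, the sketch below parameterizes the radial coordinate of a planar spiral by a truncated Fourier series so that only the coefficients act as design variables. The form of the series and the coefficient values are illustrative assumptions, not the thesis's exact parameterization.

```python
import numpy as np

def radius(theta, a, b, theta_f):
    """Truncated Fourier representation r(theta) of a planar
    low-thrust spiral; a[k], b[k] are the design variables.
    Illustrative form only; boundary-condition handling and the
    thesis's exact parameterization are not reproduced here."""
    r = a[0] / 2
    for k in range(1, len(a)):
        r += a[k] * np.cos(2 * np.pi * k * theta / theta_f)
        r += b[k] * np.sin(2 * np.pi * k * theta / theta_f)
    return r

theta = np.linspace(0.0, 6 * np.pi, 400)   # three-revolution transfer
a = [2.6, -0.3, 0.02]                      # hypothetical coefficients
b = [0.0,  0.1, -0.01]
r = radius(theta, a, b, theta_f=6 * np.pi)
```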
Abstract:
We investigate the problem of distributed sensor failure detection in networks with a small number of defective sensors, whose measurements differ significantly from the neighboring measurements. We build on the sparse nature of the binary sensor failure signals to propose a novel distributed detection algorithm based on gossip mechanisms and on group testing (GT), where the latter has so far been used in centralized detection problems. The new distributed GT algorithm estimates the set of scattered defective sensors with a low-complexity distance decoder from a small number of linearly independent binary messages exchanged by the sensors. We first consider networks with one defective sensor and determine the minimal number of linearly independent messages needed for its detection with high probability. We then extend our study to the detection of multiple defective sensors by modifying the message exchange protocol and the decoding procedure appropriately. We show that, for small and medium-sized networks, the number of messages required for successful detection is actually smaller than the minimal number computed theoretically. Finally, simulations demonstrate that the proposed method outperforms methods based on random walks in terms of both detection performance and convergence rate.
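The decoding step for the single-defective case can be sketched as follows: given a binary participation matrix and the test outcomes, the distance decoder returns the sensor whose column is closest to the outcome vector in Hamming distance. The matrix, outcomes, and noise below are hypothetical, and the gossip message exchange is not modeled.

```python
import numpy as np

def distance_decode(W, y):
    """Distance decoder for one defective sensor: W[i, t] = 1 if
    sensor i contributed to test t, y[t] = 1 if test t was positive.
    Return the sensor whose participation column best explains y
    (smallest Hamming distance). Sketch of the decoding step only."""
    dists = (W != y).sum(axis=1)        # Hamming distance per sensor
    return int(np.argmin(dists))

rng = np.random.default_rng(0)
W = rng.integers(0, 2, size=(20, 15))   # 20 sensors, 15 tests
defective = 7
y = W[defective].copy()                 # noiseless outcomes
y[3] ^= 1                               # one flipped (noisy) outcome
print(distance_decode(W, y))            # recovers 7 with high probability
```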
Abstract:
Ocean Drilling Program (ODP) Leg 134 was located in the central part of the New Hebrides Island Arc, in the Southwest Pacific. Here the d'Entrecasteaux Zone of ridges, comprising the North d'Entrecasteaux Ridge and the South d'Entrecasteaux Chain, is colliding with the arc. The region has a Neogene history of subduction polarity reversal, ridge-arc collision, and back-arc spreading. The reasons for drilling in this region included the following: (1) to determine the differences in the style and time scale of deformation associated with the two ridge-like features (a fairly continuous ridge and an irregularly topographic seamount chain) that are colliding with the central New Hebrides Island Arc; (2) to document the evolution of the magmatic arc in relation to the collision process and a possible Neogene reversal of subduction; and (3) to understand the process of dewatering of a small accretionary wedge associated with ridge collision and subduction. Seven sites were occupied during the leg; five (Sites 827-831) were located in the d'Entrecasteaux Zone where collision is active. Three sites (Sites 827, 828, and 829) were located where the North d'Entrecasteaux Ridge is colliding, whereas two sites (Sites 830 and 831) were located in the South d'Entrecasteaux Chain collision zone. Sites 828 (on the North d'Entrecasteaux Ridge) and 831 (on the Bougainville Guyot) were located on the Pacific Plate, whereas all other sites were located on a microplate of the North Fiji Basin. Two sites (Sites 832 and 833) were located in the intra-arc North Aoba Basin. Results of Leg 134 drilling showed that forearc deformation associated with the North d'Entrecasteaux Ridge and South d'Entrecasteaux Chain collisions is distinct and different. The d'Entrecasteaux Zone is an Eocene subduction/obduction complex with a distinct submerged island arc. Collision and subduction of the North d'Entrecasteaux Ridge result in offscraping of ridge material and plating of the forearc with thrust sheets (flakes), as well as distinct forearc uplift. Some offscraped sedimentary rocks and surficial volcanic basement rocks of the North d'Entrecasteaux Ridge are being underplated to the New Hebrides Island forearc. In contrast, the South d'Entrecasteaux Chain is a serrated feature resulting in intermittent collision and subduction of seamounts. The collision of the Bougainville Guyot has indented the forearc and appears to be causing shortening through thrust faulting. In addition, we found that the Quaternary relative convergence rate across the New Hebrides Island Arc at the latitude of Espiritu Santo Island is as high as 14 to 16 cm/yr. The northward migration rate of the d'Entrecasteaux Zone was found to be ~2 to 4 cm/yr based on the newly determined Quaternary relative convergence rate. Using these rates, we established the timing of the initial d'Entrecasteaux Zone collision with the arc at ~3 Ma at the latitude of Epi Island and fixed the impact of the North d'Entrecasteaux Ridge upon Espiritu Santo Island in the early Pleistocene (between 1.89 and 1.58 Ma). Dewatering is occurring in the North d'Entrecasteaux Ridge accretionary wedge, and the wedge is drier than other previously studied accretionary wedges, such as Barbados. This could be the result of less sediment being subducted at the New Hebrides than at Barbados.
Abstract:
Subducted sediments play an important role in arc magmatism and crust-mantle recycling. Models of continental growth, continental composition, convergent margin magmatism and mantle heterogeneity all require a better understanding of the mass and chemical fluxes associated with subducting sediments. We have evaluated subducting sediments on a global basis in order to better define their chemical systematics and to determine both regional and global average compositions. We then use these compositions to assess the importance of sediments to arc volcanism and crust-mantle recycling, and to re-evaluate the chemical composition of the continental crust. The large variations in the chemical composition of marine sediments are for the most part linked to the main lithological constituents. The alkali elements (K, Rb and Cs) and high field strength elements (Ti, Nb, Hf, Zr) are closely linked to the detrital phase in marine sediments; Th is largely detrital but may be enriched in the hydrogenous Fe-Mn component of sediments; REE patterns are largely continental, but abundances are closely linked to fish debris phosphate; U is mostly detrital, but also dependent on the supply and burial rate of organic matter; Ba is linked to both biogenic barite and hydrothermal components; Sr is linked to carbonate phases. Thus, the important geochemical tracers follow the lithology of the sediments. Sediment lithologies are controlled in turn by a small number of factors: proximity of detrital sources (volcanic and continental); biological productivity and preservation of carbonate and opal; and sedimentation rate. Because of the link with lithology and the wealth of lithological data routinely collected for ODP and DSDP drill cores, bulk geochemical averages can be calculated to better than 30% for most elements from fewer than ten chemical analyses for a typical drill core (100-1000 m). Combining the geochemical systematics with convergence rate and other parameters permits calculation of regional compositional fluxes for subducting sediment. These regional fluxes can be compared to the compositions of arc volcanics to assess the importance of sediment subduction to arc volcanism. For the 70% of the trenches worldwide where estimates can be made, the regional fluxes also provide the basis for a global subducting sediment (GLOSS) composition and flux. GLOSS is dominated by terrigenous material (76 wt% terrigenous, 7 wt% calcium carbonate, 10 wt% opal, 7 wt% mineral-bound H2O+), and therefore similar to upper continental crust (UCC) in composition. Exceptions include enrichment in Ba, Mn and the middle and heavy REE, and depletions in detrital elements diluted by biogenic material (alkalis, Th, Zr, Hf). Sr and Pb are identical in GLOSS and UCC as a result of a balance between dilution and enrichment by marine phases. GLOSS and the systematics of marine sediments provide an independent approach to the composition of the upper continental crust for detrital elements. Significant discrepancies of up to a factor of two exist between the marine sediment data and current upper crustal estimates for Cs, Nb, Ta and Ti. Suggested revisions to UCC include Cs (7.3 ppm), Nb (13.7 ppm), Ta (0.96 ppm) and TiO2 (0.76 wt%). These revisions affect recent bulk continental crust estimates for La/Nb and U/Nb, and lead to an even greater contrast between the continents and mantle for these important trace element ratios. GLOSS and the regional sediment data also provide new insights into the mantle sources of oceanic basalts.
The classical geochemical distinction between 'pelagic' and 'terrigenous' sediment sources is not valid and needs to be replaced by a more comprehensive understanding of the compositional variations in complete sedimentary columns. In addition, isotopic arguments based on surface sediments alone can lead to erroneous conclusions. Specifically, the Nd/Hf ratio of GLOSS considerably relaxes the severe constraints on the amount of sediment recycling into the mantle based on earlier estimates from surface sediment compositions.
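As a back-of-envelope illustration of the regional flux calculation mentioned above, the sketch below multiplies a convergence rate by a sediment column's mass per unit area and an element concentration; all numbers are placeholders, not GLOSS values.

```python
# Back-of-envelope regional flux of an element carried by subducting
# sediment, per metre of trench (all numbers illustrative only).
convergence_rate = 0.07      # m/yr  (7 cm/yr)
column_height    = 500.0     # m of sediment entering the trench
bulk_density     = 1800.0    # kg/m^3
concentration    = 7.3e-6    # kg element / kg sediment (7.3 ppm)

mass_flux    = convergence_rate * column_height * bulk_density  # kg/m/yr
element_flux = mass_flux * concentration                        # kg/m/yr
print(f"{element_flux:.3e} kg per metre of trench per year")
```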
Abstract:
In this letter, we propose a class of self-stabilizing learning algorithms for minor component analysis (MCA), which includes a few well-known MCA learning algorithms. Self-stabilizing means that the sign of the change in the weight vector length is independent of the presented input vector. For these algorithms, a rigorous global convergence proof is given and the convergence rate is also discussed. By combining the positive properties of these algorithms, a new learning algorithm is proposed which can improve performance. Simulations are employed to confirm our theoretical results.
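The letter's specific self-stabilizing rules are not given in the abstract; as a generic illustration of minor component extraction, the sketch below runs gradient descent on the Rayleigh quotient with renormalization. All names and parameters are assumptions.

```python
import numpy as np

def minor_component(C, eta=0.01, steps=5000, seed=0):
    """Extract the minor component (eigenvector of the smallest
    eigenvalue) of a covariance matrix C by gradient descent on the
    Rayleigh quotient with renormalization. Generic illustration;
    the letter's self-stabilizing learning rules differ."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(C.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(steps):
        grad = C @ w - (w @ C @ w) * w   # Rayleigh-quotient gradient on the sphere
        w -= eta * grad
        w /= np.linalg.norm(w)           # keep ||w|| = 1
    return w

C = np.diag([3.0, 2.0, 0.5])
print(minor_component(C))                # ~ +/-[0, 0, 1]
```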
Abstract:
We address the question of how to communicate values such as real numbers, continuous functions and geometrical solids among distributed processes with arbitrary precision, yet efficiently. We extend the established concept of lazy communication using streams of approximants by introducing explicit queries. We formalise this approach using protocols of a query-answer nature. Such protocols enable processes to provide valid approximations with certain accuracy, focusing on certain locality, as demanded by the receiving processes through queries. A lattice-theoretic denotational semantics of channel and process behaviour is developed. The query space is modelled as a continuous lattice in which the top element denotes the query demanding all the information, whereas other elements denote queries demanding partial and/or local information. Answers are interpreted as elements of lattices constructed over suitable domains of approximations to the exact objects. An unanswered query is treated as an error and denoted by the top element. The major novel characteristic of our semantic model is that it reflects the dependency of answers on queries. This enables the definition and analysis of an appropriate concept of convergence rate, by assigning an effort indicator to each query and a measure of information content to each answer. Thus we capture not only what function a process computes, but also how a process transforms the convergence rates from its inputs to its outputs. In future work these indicators can be used to capture further computational complexity measures. A robust prototype implementation of our model is available.
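As a toy rendering of the query-answer idea (not the thesis's lattice-theoretic model), the sketch below represents an exact real as a process that answers precision queries, with composition propagating queries upstream so each input is asked for just enough accuracy. All names and the representation are hypothetical.

```python
from fractions import Fraction

# Toy query-answer protocol: an exact real is a process that, given a
# precision query eps, answers with a rational within eps of the value.

def const(x):
    return lambda eps: Fraction(x)

def add(a, b):
    # To answer a query of accuracy eps, query each input at eps/2:
    # queries flow upstream, answers flow downstream.
    return lambda eps: a(eps / 2) + b(eps / 2)

def sqrt2():
    def answer(eps):
        lo, hi = Fraction(1), Fraction(2)
        while hi - lo > eps:             # bisect until accurate enough
            mid = (lo + hi) / 2
            lo, hi = (mid, hi) if mid * mid < 2 else (lo, mid)
        return lo
    return answer

x = add(sqrt2(), const(1))               # represents 1 + sqrt(2)
print(float(x(Fraction(1, 10**6))))      # ~ 2.414213...
```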