998 results for improving convergence


Relevance:

40.00%

Publisher:

Abstract:

Solution of the generalized eigenproblem, K phi = lambda M phi, by the classical inverse iteration method exhibits slow convergence for some eigenproblems. In this paper, a modified inverse iteration algorithm is presented for improving the convergence rate. At every iteration, an optimal linear combination of the latest and the preceding iteration vectors is used as the input vector for the next iteration. The effectiveness of the proposed algorithm is demonstrated for three typical eigenproblems, i.e. eigenproblems with distinct, close and repeated eigenvalues. The algorithm yields savings of 29%, 96% and 23% in computational time, respectively, for these problems. The algorithm is simple and easy to implement, which renders it even more attractive.
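The combination step described above can be sketched in a few lines. The sketch below reads the "optimal linear combination" as a two-vector Rayleigh-Ritz step on the span of the latest and preceding iterates, which is one plausible realization; the paper's exact choice of combination coefficients may differ, and the function names are ours.

```python
import numpy as np
from scipy.linalg import eigh, solve

def modified_inverse_iteration(K, M, tol=1e-10, max_iter=200):
    # Inverse iteration for K @ phi = lam * M @ phi, accelerated by
    # combining the latest and preceding iterates (Rayleigh-Ritz on
    # their span; an assumed reading of the paper's combination step).
    n = K.shape[0]
    x = np.ones(n)
    x /= np.sqrt(x @ M @ x)              # M-normalize the start vector
    lam = x @ K @ x
    for _ in range(max_iter):
        y = solve(K, M @ x)              # classical inverse-iteration step
        y -= (x @ M @ y) * x             # M-orthogonalize for stability
        ny = np.sqrt(y @ M @ y)
        if ny < 1e-14:                   # iterates collinear: converged
            break
        y /= ny
        V = np.column_stack([x, y])      # span of the two iterates
        w, c = eigh(V.T @ K @ V, V.T @ M @ V)   # 2x2 generalized eigenproblem
        x = V @ c[:, 0]                  # combination with smallest Ritz value
        x /= np.sqrt(x @ M @ x)
        lam = x @ K @ x
        if np.linalg.norm(K @ x - lam * M @ x) < tol * np.linalg.norm(K @ x):
            break
    return lam, x
```

On a well-separated problem the Rayleigh-Ritz step costs only a 2x2 solve per iteration, which is where the reported time savings would come from.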

Relevance:

40.00%

Publisher:

Abstract:

Optimization methods that employ the classical Powell-Hestenes-Rockafellar augmented Lagrangian are useful tools for solving nonlinear programming problems. Their reputation has decreased over the last ten years due to the comparative success of interior-point Newtonian algorithms, which are asymptotically faster. In this research, a combination of both approaches is evaluated. The idea is to produce a competitive method that is more robust and efficient than its "pure" counterparts on critical problems. Moreover, an additional hybrid algorithm is defined, in which the interior-point method is replaced by the Newtonian resolution of a Karush-Kuhn-Tucker (KKT) system identified by the augmented Lagrangian algorithm. The software used in this work is freely available through the TANGO Project web page: http://www.ime.usp.br/~egbirgin/tango/.
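For readers unfamiliar with the PHR augmented Lagrangian, a bare-bones version for inequality constraints g(x) <= 0 looks like the following. This is only a sketch, not the TANGO implementation: the inner solver, the fixed penalty parameter, and the stopping rule are all simplifying assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def phr_augmented_lagrangian(f, g, x0, rho=10.0, outer=30, tol=1e-8):
    # Classical PHR scheme for: min f(x)  s.t.  g(x) <= 0 (componentwise).
    # Each outer step minimizes the augmented Lagrangian in x, then
    # applies the PHR multiplier update.  Inner solves use BFGS.
    x = np.asarray(x0, float)
    mu = np.zeros_like(g(x))
    for _ in range(outer):
        def L(z):
            shifted = np.maximum(0.0, g(z) + mu / rho)
            return f(z) + 0.5 * rho * np.sum(shifted**2) - np.sum(mu**2) / (2 * rho)
        x = minimize(L, x, method="BFGS").x
        mu = np.maximum(0.0, mu + rho * g(x))   # PHR multiplier update
        if np.all(g(x) <= tol):                 # (near-)feasible: stop
            break
    return x, mu
```

For example, minimizing x0^2 + x1^2 subject to x0 + x1 >= 1 drives the iterates to (0.5, 0.5) with multiplier 1, matching the KKT conditions.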

Relevance:

30.00%

Publisher:

Abstract:

Efficient and effective urban management systems for Ubiquitous Eco Cities require intelligent and integrated management mechanisms. This integration brings together economic, socio-cultural and urban development with a well-orchestrated, transparent and open decision-making mechanism and the necessary infrastructure and technologies. In Ubiquitous Eco Cities, telecommunication technologies play an important role in monitoring and managing activities over wired, wireless or fibre-optic networks. In particular, technology convergence creates new ways in which information and telecommunication technologies are used, and has formed the backbone of urban management systems. The 21st Century is an era of converged information, in which people can access a variety of services, including internet and location-based services, through multi-functional devices such as mobile phones; this provides opportunities in the management of Ubiquitous Eco Cities. This research paper discusses recent developments in telecommunication networks, trends in convergence technologies, their implications for the management of Ubiquitous Eco Cities, and how this technological shift is likely to improve the quality of life and place of residents, workers and visitors. The paper also reports and introduces recent approaches to urban management systems, such as intelligent urban management systems, that are suitable for Ubiquitous Eco Cities.

Relevance:

30.00%

Publisher:

Abstract:

A successful urban management system for a Ubiquitous Eco City requires an integrated approach. This integration brings together economic, socio-cultural and urban development with a well-orchestrated, transparent and open decision-making mechanism and the necessary infrastructure and technologies. The information and telecommunication technologies and platforms that developed rapidly in the late 20th Century have improved urban management and enhanced the quality of life and place. Telecommunication technologies provide an important base for monitoring and managing activities over wired, wireless or fibre-optic networks. In particular, technology convergence creates new ways in which information and telecommunication technologies are used. The 21st Century is an era of converged information, in which people can access a variety of services, including internet and location-based services, through multi-functional devices such as mobile phones; this provides opportunities in the management of Ubiquitous Eco Cities. This paper discusses recent developments in telecommunication networks, trends in convergence technologies, their implications for the management of Ubiquitous Eco Cities, and how this technological shift is likely to improve the quality of life and place. The paper also introduces recent approaches to urban management systems, such as intelligent urban management systems, that are suitable for Ubiquitous Eco Cities.

Relevance:

30.00%

Publisher:

Abstract:

Considering some predictive mechanisms, we show that ultrafast average-consensus can be achieved in networks of interconnected agents. More specifically, by predicting the dynamics of the network several steps ahead and using this information in the design of each agent's consensus protocol, drastic improvements can be achieved in the speed of consensus convergence, without changing the topology of the network. Moreover, using these predictive mechanisms, the range of sampling periods leading to consensus convergence is greatly expanded compared with the routine consensus protocol. This study provides a mathematical basis for the idea that predictive mechanisms exist in widespread biological swarms, flocks, and networks. From the industrial engineering point of view, inclusion of an efficient predictive mechanism allows for a significant increase in the speed of consensus convergence and also a reduction of the communication energy required to achieve a predefined consensus performance.
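The contrast between a routine protocol and a predictive one can be illustrated with discrete-time average consensus. The sketch below uses the simplest possible predictive mechanism, each agent jumping to its h-step predicted state; this is a crude stand-in for the protocol in the abstract, and all names and parameters are illustrative.

```python
import numpy as np

def perron(A, eps):
    # Routine consensus matrix P = I - eps*L for a symmetric adjacency A.
    L = np.diag(A.sum(axis=1)) - A
    return np.eye(A.shape[0]) - eps * L

def consensus(P, x0, steps, horizon=0):
    # horizon == 0: routine protocol, x <- P x each step.
    # horizon == h: each agent moves to its h-step predicted state
    # (multi-step extrapolation of the network dynamics).
    x = np.asarray(x0, float).copy()
    step = np.linalg.matrix_power(P, max(horizon, 1))
    for _ in range(steps):
        x = step @ x
    return x
```

With a symmetric (doubly stochastic) P the network average is preserved, and the predictive variant shrinks the disagreement much faster over the same number of communication rounds, which is the effect the abstract reports.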

Relevance:

30.00%

Publisher:

Abstract:

Supervised learning of large-scale hierarchical networks is currently enjoying spectacular success. Despite this momentum, many researchers still regard unsupervised learning as a key element of Artificial Intelligence, in which agents must learn from a potentially limited amount of data. This thesis follows that line of thought and addresses several research topics related to the density-estimation problem through Boltzmann machines (BMs), the probabilistic graphical models at the heart of deep learning. Our contributions span sampling, partition-function estimation, optimization, and the learning of invariant representations. The thesis begins with a new adaptive sampling algorithm, which automatically adjusts the temperature of the simulated Markov chains so as to maintain a high convergence speed throughout training. When used in the context of stochastic maximum-likelihood (SML) learning, our algorithm yields increased robustness to the choice of learning rate as well as faster convergence. Our results are presented for BMs, but the method is general and applicable to the training of any probabilistic model that relies on Markov-chain sampling. While the maximum-likelihood gradient can be approximated by sampling, evaluating the log-likelihood requires an estimate of the partition function. In contrast to traditional approaches that treat a given model as a black box, we propose to exploit the dynamics of learning by estimating the successive changes in the log-partition function incurred at each parameter update.
The estimation problem is reformulated as an inference problem similar to Kalman filtering, but on a two-dimensional graph whose dimensions correspond to the time axis and the temperature parameter. On the topic of optimization, we also present an algorithm for efficiently applying the natural gradient to Boltzmann machines with thousands of units. Until now, its adoption had been limited by its high computational cost and memory requirements. Our algorithm, Metric-Free Natural Gradient (MFNG), avoids explicitly computing the Fisher information matrix (and its inverse) by combining a linear solver with an efficient matrix-vector product. The algorithm is promising: in terms of the number of function evaluations, MFNG converges faster than SML. Its implementation, however, remains computationally inefficient. This work also explores the mechanisms underlying the learning of invariant representations. To this end, we use the family of spike-and-slab restricted Boltzmann machines (ssRBM), which we modify so that they can model sparse binary distributions. The binary latent variables of the ssRBM can be made invariant to a vector subspace by associating with each of them a vector of continuous latent variables (the "slabs"). This translates into greater invariance at the representation level and better classification accuracy when few labeled examples are available. The thesis closes with an ambitious topic: learning representations that can separate the factors of variation present in the input signal. We propose a solution based on a bilinear ssRBM (with two groups of latent factors) and formulate the problem as one of pooling in complementary vector subspaces.
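The computational trick behind MFNG, avoiding the explicit Fisher matrix by feeding Fisher-vector products to an iterative linear solver, can be illustrated at toy scale. Here the Fisher matrix is taken to be the empirical second moment S^T S / N of per-sample score vectors S, and conjugate gradients solve the damped system; the toy setup and all names are ours, not the thesis code.

```python
import numpy as np

def metric_free_natural_gradient(scores, grad, damping=1e-3, iters=50):
    # Solve (F + damping*I) d = grad by conjugate gradients, where
    # F = S^T S / N is never formed: only matrix-vector products
    # S^T (S v) / N are evaluated (the metric-free idea).
    N = scores.shape[0]

    def Fv(v):
        return scores.T @ (scores @ v) / N + damping * v

    d = np.zeros_like(grad)
    r = grad - Fv(d)                 # initial residual
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Ap = Fv(p)
        alpha = rs / (p @ Ap)
        d += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if rs_new < 1e-20:           # residual vanished: done
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return d
```

For a model with millions of parameters the Fisher matrix could never be stored, but each Fv call here costs only two matrix-vector products, which is the point of the approach.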

Relevance:

30.00%

Publisher:

Abstract:

In this paper, research is conducted on exploring the potential of several popular equalization techniques while overcoming their disadvantages. First, an extensive literature survey on equalization is conducted, focused on several popular linear equalization algorithms, such as the conventional least-mean-square (LMS) algorithm, the recursive least squares (RLS) algorithm, the filtered-X LMS algorithm, and their development. To analyse the performance of the filtered-X LMS algorithm, a heuristic method based on linear time-invariant operator theory is provided to study the robust performance of the filtered-X structure. It indicates that the extra filter can enhance the stability margin of the corresponding non-filtered-X structure. To overcome the slow convergence problem while keeping the simplicity of the LMS-based algorithms, an H2-optimal initialization is proposed.
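The conventional LMS algorithm named above is short enough to state in full. The sketch below is in system-identification form; signal names, tap count and step size are illustrative, not values from the paper.

```python
import numpy as np

def lms_equalizer(x, d, n_taps, mu):
    # Conventional LMS: adapt FIR weights w so that w . u[k] tracks d[k],
    # where u[k] holds the n_taps most recent input samples.
    w = np.zeros(n_taps)
    e = np.zeros(len(x))
    for k in range(n_taps - 1, len(x)):
        u = x[k - n_taps + 1:k + 1][::-1]   # [x[k], x[k-1], ..., x[k-n_taps+1]]
        e[k] = d[k] - w @ u                  # a priori error
        w += mu * e[k] * u                   # stochastic-gradient weight update
    return w, e
```

Its simplicity (one multiply-accumulate pass per sample) is exactly what the proposed H2 initialization aims to preserve while curing the slow convergence.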

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we present the experimental results of three adaptive equalization algorithms: the least-mean-square (LMS) algorithm, the discrete cosine transform-least mean square (DCT-LMS) algorithm, and the recursive least squares (RLS) algorithm. The experiments show that the convergence rate of LMS is slow; the convergence rate of RLS is much faster, but at a high computational price; and DCT-LMS falls between the other two on both measures, yet is still not good enough. We will therefore propose an algorithm based on H2 in a coming paper to solve these problems.
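The speed/cost trade-off reported for RLS comes from its per-sample O(n^2) update of an inverse-correlation estimate, sketched below in standard exponentially-weighted form (forgetting factor and initialization are illustrative, not the paper's experimental settings).

```python
import numpy as np

def rls_filter(x, d, n_taps, lam=0.99, delta=100.0):
    # Recursive least squares: O(n_taps^2) work per sample, versus
    # O(n_taps) for LMS, in exchange for much faster convergence.
    w = np.zeros(n_taps)
    P = np.eye(n_taps) * delta            # inverse correlation estimate
    for k in range(n_taps - 1, len(x)):
        u = x[k - n_taps + 1:k + 1][::-1]
        Pu = P @ u
        g = Pu / (lam + u @ Pu)           # gain vector
        e = d[k] - w @ u                  # a priori error
        w += g * e
        P = (P - np.outer(g, Pu)) / lam   # rank-one inverse update
    return w
```

On a short noiseless identification run RLS locks onto the true taps within a few hundred samples, where LMS with a comparable setup is still converging, mirroring the comparison in the abstract.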

Relevance:

30.00%

Publisher:

Abstract:

Today, forty years since its birth, Caribbean integration has reached its limit. Consequently, there is an urgent need to respond to current realities and emerging global trends, which require greater engagement from the public, students, academics and policymakers, in moving the Caribbean Community towards a new trajectory of Caribbean convergence. The immediate concern is to devise ways of improving the convergence process among Latin American and Caribbean countries. This convergence process will have to be sensitive to both current and emerging global dynamics. This paper presents the roadmap of a new trajectory towards Caribbean convergence, sensitive to both current and emergent regional and global trends. It begins in Section I by identifying the emerging international political and economic trends that provide the backdrop against which the discussion on Caribbean convergence is squarely placed. Section II discusses the need for a new strategy of convergence and provides the conceptual framework of Caribbean convergence. Section III spells out the pillars, strategies and delivery mechanisms of Caribbean convergence, and highlights the role of Trinidad and Tobago in this process. The paper concludes by pointing out the urgent need for a regional synergy of economic logic and political logic.

Relevance:

30.00%

Publisher:

Abstract:

Using data from the Current Population Survey, we examine recent trends in the relative economic status of black men. Our findings point to gains in the relative wages of black men (compared to whites) during the 1990s, especially among younger workers. In 1989, the average black male worker (experienced or not) earned about 69 percent as much per week as the average white male worker. In 2001, the average younger black worker was earning about 86 percent as much as an equally experienced white male; black males at all experience levels earned 72 percent as much as the average white in 2001. Greater occupational diversity and a reduction in unobserved skill differences and/or labor market discrimination explain much of the trend. For both younger and older workers, general wage inequality tempered the rate of wage convergence between blacks and whites during the 1990s, although the effects were less pronounced than during the 1980s.

Relevance:

30.00%

Publisher:

Abstract:

Acquiring information system technologies through the services of an external supplier can be one of the best options for reducing the implementation and maintenance costs of software solutions, and allows a company to make more efficient use of its resources. The focus of this paper is to outline a methodology structure for software acquisition management. The methodology proposed in this paper is the result of studying, and drawing on the convergence of, the weaknesses and strengths of several models (CMMI, SA-CMM, ISO/IEC TR 15504, COBIT, and ITIL) that include the software acquisition process.

Relevance:

30.00%

Publisher:

Abstract:

We present a cluster-based routing protocol in which a backup cluster head is used to improve the convergence of the clusters and of the network. A discrete-event simulator is used to implement and simulate this hierarchical routing protocol, called the Backup Cluster Head Protocol (BCHP). Finally, a comparative analysis with the Ad Hoc On-Demand Distance Vector (AODV) [1] routing protocol and the Cluster Based Routing Protocol (CBRP) [2] shows that BCHP improves the convergence and availability of the network.
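The core idea, keeping a standby head so the cluster re-converges immediately when its head fails, can be sketched abstractly. The election rule below (lowest node id wins, runner-up becomes backup) is purely hypothetical; the abstract does not specify BCHP's actual election or failover details.

```python
from dataclasses import dataclass, field

@dataclass
class Cluster:
    # Toy cluster state for a BCHP-style scheme: alongside the primary
    # cluster head, a backup head is maintained so the cluster recovers
    # from head failure without a full re-election pause.
    members: list
    head: int = field(init=False)
    backup: int = field(init=False)

    def __post_init__(self):
        self.elect()

    def elect(self):
        # hypothetical rule: lowest node id wins, runner-up is backup
        ranked = sorted(self.members)
        self.head, self.backup = ranked[0], ranked[1]

    def head_failed(self):
        self.members.remove(self.head)
        self.head = self.backup          # instant failover to the backup
        ranked = sorted(self.members)
        self.backup = ranked[1] if len(ranked) > 1 else self.head
```

The availability gain over a scheme like CBRP comes from the failover step being a local constant-time promotion rather than a network-wide re-election.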

Relevance:

30.00%

Publisher:

Abstract:

In spite of increasing globalization around the world, the effects of international trade on economic growth are not very clear. I consider an endogenous economic growth model in an open economy with the Home Market Effect (HME) and non-homothetic preferences in order to identify some determinants of the differing results in this relationship. The model shows how trade between similar countries leads to convergence in economic growth when knowledge spillovers are present, while trade between very asymmetric countries produces divergence and may turn trade into a poverty or growth trap. The results for welfare move in the same direction as economic growth: convergence implies welfare gains for both countries, while divergence leads to welfare gains for the largest country and losses for its commercial partner in the absence of knowledge spillovers. International trade therefore does not necessarily imply greater welfare, as it usually does in a static context under CES preferences.