876 results for Almost Convergence


Relevance: 20.00%

Publisher:

Abstract:

We revisit the one-unit gradient ICA algorithm derived from the kurtosis function. By carefully studying the properties of the stationary points of the discrete-time one-unit gradient ICA algorithm, we prove convergence under a suitable condition on the learning rate. This condition helps alleviate the guesswork that accompanies the choice of a suitable learning rate in practical computation. These results may be useful for extracting independent source signals on-line.
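The algorithm family the abstract analyses can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the fixed learning rate `eta` here is an assumption, whereas the paper derives an explicit condition on the learning rate that guarantees convergence.

```python
import numpy as np

def one_unit_kurtosis_ica(X, eta=0.1, n_iter=1000, tol=1e-10, seed=0):
    """One-unit gradient ICA on whitened data X (dims x samples),
    ascending |kurtosis(w @ X)| over unit vectors w.

    Sketch only: eta is a fixed, assumed learning rate; the paper's
    condition on the learning rate is what secures convergence.
    """
    rng = np.random.default_rng(seed)
    d, _ = X.shape
    w = rng.standard_normal(d)
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        y = w @ X
        kurt = np.mean(y**4) - 3.0                 # excess kurtosis of the output
        grad = (X * y**3).mean(axis=1) - 3.0 * w   # kurtosis gradient for whitened data
        w_new = w + eta * np.sign(kurt) * grad     # ascend |kurtosis|
        w_new /= np.linalg.norm(w_new)             # project back to the unit sphere
        if np.linalg.norm(w_new - w) < tol:
            return w_new
        w = w_new
    return w
```

Run on whitened mixtures of a super-Gaussian (e.g. Laplacian) and a sub-Gaussian source, the iteration drives `w` toward one whitened source direction, so `w @ X` recovers one source up to sign and scale.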

Relevance: 20.00%

Publisher:

Abstract:

Markov chain Monte Carlo (MCMC) is a methodology that is gaining widespread use in the phylogenetics community and is central to phylogenetic software packages such as MrBayes. An important issue for users of MCMC methods is how to select appropriate values for adjustable parameters such as the length of the Markov chain or chains, the sampling density, the proposal mechanism, and, if Metropolis-coupled MCMC is being used, the number of heated chains and their temperatures. Although some parameter settings have been examined in detail in the literature, others are frequently chosen with more regard to computational time or personal experience with other data sets. Such choices may lead to inadequate sampling of tree space or an inefficient use of computational resources. We performed a detailed study of convergence and mixing for 70 randomly selected, putatively orthologous protein sets with different sizes and taxonomic compositions. Replicated runs from multiple random starting points permit a more rigorous assessment of convergence, and we developed two novel statistics, delta and epsilon, for this purpose. Although likelihood values invariably stabilized quickly, adequate sampling of the posterior distribution of tree topologies took considerably longer. Our results suggest that multimodality is common for data sets with 30 or more taxa and that this results in slow convergence and mixing. However, we also found that the pragmatic approach of combining data from several short, replicated runs into a metachain to estimate bipartition posterior probabilities provided good approximations, and that such estimates were no worse in approximating a reference posterior distribution than those obtained using a single long run of the same length as the metachain. Precision appears to be best when heated Markov chains have low temperatures, whereas chains with high temperatures appear to sample trees with high posterior probabilities only rarely. 
[Bayesian phylogenetic inference; heating parameter; Markov chain Monte Carlo; replicated chains.]
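The replicated-runs idea can be illustrated with the familiar comparison of bipartition (split) frequencies between independent runs, as popularised by MrBayes. This is shown for flavour only; it is the standard average standard deviation of split frequencies, not the delta and epsilon statistics the abstract introduces.

```python
import numpy as np

def asdsf(split_freqs_a, split_freqs_b, min_freq=0.1):
    """Average standard deviation of split frequencies between two runs.

    split_freqs_* map bipartition identifiers to their sampled posterior
    frequencies. Splits rare in both runs are ignored (min_freq cutoff,
    an assumed convention). Values near zero suggest the two runs are
    sampling the same posterior distribution of topologies.
    """
    splits = set(split_freqs_a) | set(split_freqs_b)
    sds = []
    for s in splits:
        fa = split_freqs_a.get(s, 0.0)
        fb = split_freqs_b.get(s, 0.0)
        if max(fa, fb) >= min_freq:      # skip very rare splits
            sds.append(np.std([fa, fb]))
    return float(np.mean(sds)) if sds else 0.0
```

Two runs with identical split frequencies give 0.0; disagreeing runs give a positive value, flagging inadequate convergence or mixing.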

Relevance: 20.00%

Publisher:

Abstract:

This introduction considers reasons why public policies might be expected to converge between Britain and Germany, arguing that the inter-related forces of globalisation, Europeanisation, policy transfer (in various guises) and the election of centre-left governments in 1997 and 1998 could be expected to lead to such convergence. It then outlines important reasons why such convergence may not occur: radically different institutional settings, 'path dependence' and the resilience of established institutions all play a role in continuing divergence in a number of important areas of public policy.

Relevance: 20.00%

Publisher:

Abstract:

This paper shows that the Italian economy has two long-run equilibria, which are due to the different levels of industrialization between the centre-north and the south of the country. These equilibria converge until 1971 but diverge afterwards; the end of the convergence process coincides with the slowing down of Italy's industrialization policy in the South. We argue that, to address this problem effectively, an economic policy completely different from the one in place is needed. However, such a policy is unlikely to be implemented given the scarcity of resources and the short-run nature of the political cycle.

Relevance: 20.00%

Publisher:

Abstract:

We present an implementation of the domain-theoretic Picard method for solving initial value problems (IVPs) introduced by Edalat and Pattinson [1]. Compared to Edalat and Pattinson's implementation, our algorithm uses a more efficient arithmetic based on an arbitrary precision floating-point library. Despite the additional overestimations due to floating-point rounding, we obtain a similar bound on the convergence rate of the produced approximations. Moreover, our convergence analysis is detailed enough to allow a static optimisation in the growth of the precision used in successive Picard iterations. Such optimisation greatly improves the efficiency of the solving process. Although a similar optimisation could be performed dynamically without our analysis, a static one gives us a significant advantage: we are able to predict the time it will take the solver to obtain an approximation of a certain (arbitrarily high) quality.
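The underlying Picard iteration can be sketched in plain floating point. This is only a sketch of the fixed-point iteration the solver is built on: the implementation described in the abstract works with validated interval enclosures and arbitrary precision that grows across iterations, both of which are omitted here; the grid size and iteration count below are assumptions.

```python
import numpy as np

def picard(f, y0, t_end, n_grid=200, n_iter=30):
    """Plain Picard iteration y_{k+1}(t) = y0 + integral_0^t f(s, y_k(s)) ds
    on a uniform grid, with the trapezoidal rule for the integral.

    Floating-point sketch only; the domain-theoretic method replaces
    these approximations with guaranteed interval enclosures.
    """
    t = np.linspace(0.0, t_end, n_grid)
    y = np.full_like(t, y0)                     # y_0(t) = y0, the constant initial guess
    for _ in range(n_iter):
        fy = f(t, y)
        # cumulative trapezoidal integral of f(s, y(s)) from 0 to each t
        steps = (fy[1:] + fy[:-1]) / 2.0 * np.diff(t)
        y = y0 + np.concatenate(([0.0], np.cumsum(steps)))
    return t, y
```

For the IVP y' = y, y(0) = 1 on [0, 1], the iterates converge like t^k/k! toward the exponential, so a few dozen iterations suffice on this grid.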

Relevance: 20.00%

Publisher:

Abstract:

We assessed summation of contrast across eyes and area at detection threshold (C_t). Stimuli were sine-wave gratings (2.5 c/deg) spatially modulated by cosine- and anticosine-phase raised plaids (0.5 c/deg components oriented at ±45°). When presented dichoptically the signal regions were interdigitated across eyes but produced a smooth continuous grating following their linear binocular sum. The average summation ratio (C_t1/C_t1+2) for this stimulus pair was 1.64 (4.3 dB). This was only slightly less than the binocular summation found for the same patch type presented to both eyes, and the area summation found for the two different patch types presented to the same eye. We considered 192 model architectures containing each of the following four elements in all possible orders: (i) linear summation or a MAX operator across eyes, (ii) linear summation or a MAX operator across area, (iii) linear or accelerating contrast transduction, and (iv) additive Gaussian, stochastic noise. Formal equivalences reduced this to 62 different models. The most successful four-element model was: linear summation across eyes followed by nonlinear contrast transduction, linear summation across area, and late noise. Model performance was enhanced when additional nonlinearities were placed before binocular summation and after area summation. The implications for models of probability summation and uncertainty are discussed.

Relevance: 20.00%

Publisher:

Abstract:

How are innovative new business models established if organizations constantly compare themselves against existing criteria and expectations? The objective is to address this question from the perspective of innovators and their ability to redefine established expectations and evaluation criteria. The research questions ask whether there are discernible patterns of discursive action through which innovators theorize institutional change, and what role such theorizations play in mobilizing support and realizing change projects. These questions are investigated through a case study on a critical area of enterprise computing software, Java application servers. In the present case, business practices and models were already well established among incumbents, with critical market areas allocated to a few dominant firms. Fringe players started experimenting with a new business approach of selling services around freely available open-source application servers. While most new players struggled, one new entrant succeeded in leading incumbents to adopt and compete on the new model. The case demonstrates that innovative and substantially new models and practices are established in organizational fields when innovators are able to redefine expectations and evaluation criteria within the field. The study addresses the theoretical paradox of embedded agency: actors who are embedded in prevailing institutional logics and structures find it hard to perceive potentially disruptive opportunities that fall outside existing ways of doing things, and changing those logics and structures requires strategic and institutional work aimed at overcoming barriers to innovation. The study approaches this problem through the lens of (new) institutional theory, using a discourse methodology to trace the process through which innovators were able to establish a new social and business model in the field.

Relevance: 20.00%

Publisher:

Abstract:

Multi-agent systems are complex systems comprising multiple intelligent agents that act either independently or in cooperation with one another. Agent-based modelling is a method for studying complex systems such as economies, societies and ecologies. Because of their complexity, mathematical analysis is often limited in its ability to analyse such systems; in such cases, agent-based modelling offers a practical, constructive method of analysis. The objective of this book is to shed light on some emergent properties of multi-agent systems. The authors focus their investigation on the effect of knowledge exchange on the convergence of complex, multi-agent systems.
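The book's theme of knowledge exchange driving convergence can be illustrated with a toy gossip model. This is an illustrative sketch, not a model taken from the book: agents hold a scalar "knowledge" value and repeated pairwise exchange pulls the population toward consensus.

```python
import random

def simulate_knowledge_exchange(n_agents=50, n_steps=2000, seed=0):
    """Toy agent-based model: at each step a random pair of agents
    exchanges knowledge by both adopting the pair's mean value.

    Pairwise averaging preserves the population mean while shrinking
    the spread, so the system converges toward a shared value --
    a simple instance of the emergent convergence the book studies.
    """
    rng = random.Random(seed)
    knowledge = [rng.uniform(0.0, 1.0) for _ in range(n_agents)]
    for _ in range(n_steps):
        i, j = rng.sample(range(n_agents), 2)      # pick two distinct agents
        mean = (knowledge[i] + knowledge[j]) / 2.0
        knowledge[i] = knowledge[j] = mean         # both adopt the shared value
    return knowledge
```

After a few thousand exchanges among 50 agents, the gap between the best- and worst-informed agent is essentially zero, while the population mean is unchanged from its initial value.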