20 results for Superlinear and Semi–Superlinear Convergence
in Aston University Research Archive
Abstract:
Recent investigations into cross-country convergence follow Mankiw, Romer, and Weil (1992) in using a log-linear approximation to the Swan-Solow growth model to specify regressions. These studies tend to assume a common and exogenous technology. In contrast, the technology catch-up literature endogenises the growth of technology. The use of capital stock data renders the approximations and over-identification of the Mankiw model unnecessary and enables us, using dynamic panel estimation, to estimate the separate contributions of diminishing returns and technology transfer to the rate of conditional convergence. We find that both effects are important.
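For readers unfamiliar with the mechanics, a conditional convergence regression of this kind relates growth over an interval to the lagged level of log output per worker; the coefficient b on the lagged level implies a convergence rate lambda = -ln(1 + b) / tau for an interval of length tau. The sketch below illustrates this on synthetic data. The variable names and data-generating process are placeholders, and it uses plain pooled OLS rather than the dynamic panel estimator the abstract describes.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical beta-convergence regression on synthetic panel data.
rng = np.random.default_rng(1)
n_countries, n_periods, tau = 40, 8, 5
lam_true = 0.02                               # true convergence rate per year
y = rng.normal(9.0, 1.0, n_countries)         # initial log output per worker
rows = []
for t in range(n_periods):
    # growth over the interval: (e^{-lam*tau} - 1) * y_t plus noise
    growth = -(1 - np.exp(-lam_true * tau)) * y + rng.normal(0.1, 0.05, n_countries)
    rows.append(np.column_stack([growth, y]))
    y = y + growth
data = np.vstack(rows)
X = sm.add_constant(data[:, 1])               # regress growth on lagged level
b = sm.OLS(data[:, 0], X).fit().params[1]
print("implied convergence rate:", -np.log(1 + b) / tau)  # recovers ~0.02
```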
Abstract:
Purpose: To investigate the accommodation-convergence relationship during the incipient phase of presbyopia. The study aimed to differentiate between the current theories of presbyopia and to explore the mechanisms by which the oculomotor system compensates for the change in the accommodation-convergence relationship contingent on a declining amplitude of accommodation. Methods: Using a Canon R-1 open-view autorefractor and a haploscope device, measurements were made of the stimulus and response accommodative convergence/accommodation ratios and the convergence accommodation/convergence ratio of 28 subjects aged 35-45 years at the commencement of the study. Amplitude of accommodation was assessed using a push-down technique. The measurements were repeated at 4-monthly intervals over a 2-year period. Results: The results showed that with the decline in the amplitude of accommodation there is an increase in the accommodative convergence response per unit of accommodative response and a decrease in the convergence accommodation response per unit of convergence. Conclusions: The results of this study fail to support the Hess-Gullstrand theory of presbyopia in that the ciliary muscle effort required to produce a unit change in accommodation increases, rather than stays constant, with age. Data show that the near vision response is limited to the maximum vergence response that can be tolerated and, despite being within the amplitude of accommodation, a stimulus may still appear blurred because the vergence component determines the proportion of available accommodation utilised during near vision.
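As a point of reference for the ratios reported here, the response AC/A ratio is conventionally estimated as the slope of accommodative convergence (in prism dioptres) against accommodative response (in dioptres), and the CA/C ratio analogously with the roles reversed. A minimal sketch with hypothetical numbers, not data from this study:

```python
import numpy as np

# Illustrative only: AC/A as the least-squares slope of accommodative
# convergence against accommodative response across stimulus levels.
accom_response = np.array([0.5, 1.0, 1.5, 2.0, 2.5])      # dioptres (hypothetical)
accom_convergence = np.array([2.1, 3.9, 6.2, 8.0, 10.1])  # prism dioptres (hypothetical)
ac_a = np.polyfit(accom_response, accom_convergence, 1)[0]
print(f"response AC/A ratio: {ac_a:.2f} prism dioptres per dioptre")
```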
Abstract:
The scaling problems which afflict attempts to optimise neural networks (NNs) with genetic algorithms (GAs) are disclosed. A novel GA-NN hybrid is introduced, based on the bumptree, a little-used connectionist model. As well as being computationally efficient, the bumptree is shown to be more amenable to genetic coding than other NN models. A hierarchical genetic coding scheme is developed for the bumptree and shown to have low redundancy, as well as being complete and closed with respect to the search space. When applied to optimising bumptree architectures for classification problems the GA discovers bumptrees which significantly out-perform those constructed using a standard algorithm. The fields of artificial life, control and robotics are identified as likely application areas for the evolutionary optimisation of NNs. An artificial life case-study is presented and discussed. Experiments are reported which show that the GA-bumptree is able to learn simulated pole balancing and car parking tasks using only limited environmental feedback. A simple modification of the fitness function allows the GA-bumptree to learn mappings which are multi-modal, such as robot arm inverse kinematics. The dynamics of the 'geographic speciation' selection model used by the GA-bumptree are investigated empirically and the convergence profile is introduced as an analytical tool. The relationships between the rate of genetic convergence and the phenomena of speciation, genetic drift and punctuated equilibrium are discussed. The importance of genetic linkage to GA design is discussed and two new recombination operators are introduced. The first, linkage mapped crossover (LMX), is shown to be a generalisation of existing crossover operators. LMX provides a new framework for incorporating prior knowledge into GAs. Its adaptive form, ALMX, is shown to be able to infer linkage relationships automatically during genetic search.
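To make the recombination discussion concrete, here is a minimal GA skeleton with truncation selection and one-point crossover. This is not the LMX operator itself (the abstract describes LMX only as a generalisation of such crossovers via a linkage map), so treat it as the baseline that LMX generalises; all parameters are illustrative.

```python
import random

def one_point_crossover(a, b):
    """Standard one-point crossover; LMX is described as generalising operators like this."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:], b[:cut] + a[cut:]

def evolve(fitness, length=16, pop_size=50, generations=100):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]           # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            c1, c2 = one_point_crossover(*random.sample(parents, 2))
            children.extend([c1, c2])
        pop = parents + children[: pop_size - len(parents)]
    return max(pop, key=fitness)

best = evolve(fitness=sum)  # toy objective: maximise the number of ones
```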
Abstract:
The increasing adoption of international accounting standards and global convergence of accounting regulations is frequently heralded as serving to reduce diversity in financial reporting practice. In a process said to be driven in large part by the interests of international business and global financial markets, one might expect the greatest degree of convergence to be found amongst the world’s largest multinational financial corporations. This paper challenges such claims and presumptions. Its content analysis of longitudinal data for the period 2000-2006 reveals substantial, ongoing diversity in the market risk disclosure practices, both numerical and narrative, of the world’s top 25 banks. The significance of such findings is reinforced by the sheer scale of the banking sector’s risk exposures that have been subsequently revealed in the current global financial crisis. The variations in disclosure practices documented in the paper apply both across and within national boundaries, leading to a firm conclusion that, at least in terms of market risk reporting, progress towards international harmonisation remains rather more apparent than real.
Abstract:
On-line learning is examined for the radial basis function network, an important and practical type of neural network. The evolution of generalization error is calculated within a framework which allows the phenomena of the learning process, such as the specialization of the hidden units, to be analyzed. The distinct stages of training are elucidated, and the role of the learning rate described. The three most important stages of training, the symmetric phase, the symmetry-breaking phase, and the convergence phase, are analyzed in detail; the convergence phase analysis allows derivation of maximal and optimal learning rates. As well as finding the evolution of the mean system parameters, the variances of these parameters are derived and shown to be typically small. Finally, the analytic results are strongly confirmed by simulations.
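A minimal sketch of the on-line (one-example-per-step) setting analysed here: a student RBF network trained by stochastic gradient descent on data labelled by a teacher RBF network, with generalization error tracked on held-out inputs. Network sizes, the basis-function width, and the learning rate are illustrative assumptions, not values from the analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_hidden, eta, width = 8, 4, 0.05, 1.0

def rbf(x, centers, weights):
    acts = np.exp(-np.sum((x - centers) ** 2, axis=1) / (2 * width ** 2))
    return weights @ acts, acts

teacher_c = rng.normal(size=(n_hidden, dim))
teacher_w = rng.normal(size=n_hidden)
centers = rng.normal(size=(n_hidden, dim)) * 0.1   # near-symmetric initial student
weights = np.zeros(n_hidden)
test_x = rng.normal(size=(500, dim))

for step in range(20001):
    x = rng.normal(size=dim)                       # one fresh example per step
    y_t, _ = rbf(x, teacher_c, teacher_w)
    y_s, acts = rbf(x, centers, weights)
    err = y_s - y_t
    weights -= eta * err * acts                    # gradient step on output weights
    centers -= eta * err * (weights * acts)[:, None] * (x - centers) / width ** 2
    if step % 5000 == 0:                           # estimate generalization error
        e = [rbf(t, centers, weights)[0] - rbf(t, teacher_c, teacher_w)[0] for t in test_x]
        print(step, np.mean(np.square(e)))
```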
Abstract:
An adaptive back-propagation algorithm parameterized by an inverse temperature 1/T is studied and compared with gradient descent (standard back-propagation) for on-line learning in two-layer neural networks with an arbitrary number of hidden units. Within a statistical mechanics framework, we analyse these learning algorithms in both the symmetric and the convergence phase for finite learning rates in the case of uncorrelated teachers of similar but arbitrary length T. These analyses show that adaptive back-propagation generally results in faster training than gradient descent, both by breaking the symmetry between hidden units more efficiently and by converging faster to optimal generalization.
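The abstract does not spell out the update rule, so the sketch below pairs standard on-line gradient descent for a soft committee machine with an assumed adaptive modulation: hidden-unit credit is reweighted through a softmax with inverse temperature beta = 1/T. The softmax form is this sketch's assumption, included only to show where such a parameter would enter; it is not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
N, K, eta, beta = 100, 3, 0.1, 2.0            # input dim, hidden units, rate, 1/T
teacher = rng.normal(size=(K, N)) / np.sqrt(N)
student = rng.normal(size=(K, N)) * 0.01      # near-symmetric start
g = np.tanh

for step in range(50000):
    x = rng.normal(size=N)
    h_t, h_s = teacher @ x, student @ x
    delta = g(h_t).sum() - g(h_s).sum()                   # scalar output error
    resp = np.exp(beta * np.abs(h_s))
    resp = K * resp / resp.sum()                          # ASSUMED adaptive reweighting
    student += (eta / N) * delta * (resp * (1 - g(h_s) ** 2))[:, None] * x
```

Setting resp to a vector of ones reduces the loop to standard gradient descent, the baseline against which the adaptive variant is compared.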
Abstract:
Over the full visual field, contrast sensitivity is fairly well described by a linear decline in log sensitivity as a function of eccentricity (expressed in grating cycles). However, many psychophysical studies of spatial visual function concentrate on the central ±4.5 deg (or so) of the visual field. As the details of the variation in sensitivity have not been well documented in this region we did so for small patches of target contrast at several spatial frequencies (0.7–4 c/deg), meridians (horizontal, vertical, and oblique), orientations (horizontal, vertical, and oblique), and eccentricities (0–18 cycles). To reduce the potential effects of stimulus uncertainty, circular markers surrounded the targets. Our analysis shows that the decline in binocular log sensitivity within the central visual field is bilinear: The initial decline is steep, whereas the later decline is shallow and much closer to the classical results. The bilinear decline was approximately symmetrical in the horizontal meridian and declined most steeply in the superior visual field. Further analyses showed our results to be scale-invariant and that this property could not be predicted from cone densities. We used the results from the cardinal meridians to radially interpolate an attenuation surface with the shape of a witch's hat that provided good predictions for the results from the oblique meridians. The witch's hat provides a convenient starting point from which to build models of contrast sensitivity, including those designed to investigate signal summation and neuronal convergence of the image contrast signal. Finally, we provide Matlab code for constructing the witch's hat.
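The attenuation surface lends itself to a compact functional sketch: a bilinear decline in log sensitivity with eccentricity (in grating cycles), radially interpolated with a per-meridian gain to form the witch's hat. The slopes, breakpoint, and gain function below are placeholders rather than the fitted values (the paper's own implementation is the Matlab code it provides).

```python
import numpy as np

def bilinear_log_sensitivity(ecc_cycles, steep=-0.2, shallow=-0.05, knee=4.0):
    """Log attenuation vs eccentricity in grating cycles: steep then shallow decline."""
    return np.where(ecc_cycles < knee,
                    steep * ecc_cycles,
                    steep * knee + shallow * (ecc_cycles - knee))

def witchs_hat(x_cycles, y_cycles,
               meridian_gain=lambda theta: 1.0 + 0.2 * np.sin(theta)):
    """Radial interpolation: a per-meridian gain modulates the bilinear profile."""
    ecc = np.hypot(x_cycles, y_cycles)
    theta = np.arctan2(y_cycles, x_cycles)
    return meridian_gain(theta) * bilinear_log_sensitivity(ecc)
```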
Abstract:
Optimal paths connecting randomly selected network nodes and fixed routers are studied analytically in the presence of a nonlinear overlap cost that penalizes congestion. Routing becomes more difficult as the number of selected nodes increases and exhibits ergodicity breaking in the case of multiple routers. The ground state of such systems reveals nonmonotonic complex behaviors in average path length and algorithmic convergence, depending on the network topology, and densities of communicating nodes and routers. A distributed linearly scalable routing algorithm is also devised. © 2012 American Physical Society.
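A toy version of the routing problem helps to fix ideas: each communicating pair is routed by shortest path under a marginal cost that grows superlinearly with the number of paths already sharing an edge, and pairs are re-routed in sweeps until loads settle. The cost exponent and the greedy sequential re-routing are this sketch's assumptions; the paper's results come from a statistical-physics analysis and a distributed algorithm, not this heuristic.

```python
import heapq

def dijkstra(adj, src, dst, load, gamma=2.0):
    """Shortest path where an edge already used k times adds (k+1)**gamma - k**gamma."""
    pq, seen = [(0.0, src, [src])], set()
    while pq:
        cost, u, path = heapq.heappop(pq)
        if u == dst:
            return path
        if u in seen:
            continue
        seen.add(u)
        for v in adj[u]:
            k = load.get(frozenset((u, v)), 0)
            heapq.heappush(pq, (cost + (k + 1) ** gamma - k ** gamma, v, path + [v]))
    return None  # dst unreachable (graph assumed connected below)

def route_all(adj, pairs, sweeps=5):
    load, paths = {}, {}
    for _ in range(sweeps):                      # re-route until loads settle
        for pair in pairs:
            old = paths.pop(pair, None)
            if old:                              # release the old path's load
                for e in zip(old, old[1:]):
                    load[frozenset(e)] -= 1
            new = dijkstra(adj, pair[0], pair[1], load)
            for e in zip(new, new[1:]):
                load[frozenset(e)] = load.get(frozenset(e), 0) + 1
            paths[pair] = new
    return paths
```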
Abstract:
This introduction considers reasons why public policies might be expected to converge between Britain and Germany, arguing that the inter-related forces of globalisation, Europeanisation, policy transfer (in various guises) and the election of centre-left governments in 1997 and 1998 could be expected to lead to such convergence. It then outlines important reasons why such convergence may not occur: radically different institutional settings, 'path dependence' and the resilience of established institutions all play a role in continuing divergence in a number of important areas of public policy.
Abstract:
This paper shows that the Italian economy has two long-run equilibria, which are due to the different levels of industrialization between the centre-north and the south of the country. These equilibria converge until 1971 but diverge afterwards; the end of the convergence process coincides with the slowing down of Italy's industrialization policy in the South. In this paper we argue that to address this problem effectively, an economic policy completely different from that currently in place is needed. However, such a policy is unlikely to be implemented given the scarcity of resources and the short-run nature of the political cycle.
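Purely to illustrate the two-regime pattern being described, the sketch below fits separate linear trends to a synthetic north-south log output gap before and after 1971. The data are fabricated to mirror the claim and carry no empirical content.

```python
import numpy as np

years = np.arange(1951, 2001)
gap = np.where(years <= 1971,
               0.6 - 0.02 * (years - 1951),     # convergence phase: gap narrows
               0.2 + 0.01 * (years - 1971))     # divergence phase: gap widens
gap = gap + np.random.default_rng(0).normal(0, 0.02, years.size)
pre = np.polyfit(years[years <= 1971], gap[years <= 1971], 1)[0]
post = np.polyfit(years[years > 1971], gap[years > 1971], 1)[0]
print(f"gap trend pre-1971: {pre:+.3f}/yr, post-1971: {post:+.3f}/yr")
```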
Abstract:
We present an implementation of the domain-theoretic Picard method for solving initial value problems (IVPs) introduced by Edalat and Pattinson [1]. Compared to Edalat and Pattinson's implementation, our algorithm uses a more efficient arithmetic based on an arbitrary precision floating-point library. Despite the additional overestimations due to floating-point rounding, we obtain a similar bound on the convergence rate of the produced approximations. Moreover, our convergence analysis is detailed enough to allow a static optimisation in the growth of the precision used in successive Picard iterations. Such optimisation greatly improves the efficiency of the solving process. Although a similar optimisation could be performed dynamically without our analysis, a static one gives us a significant advantage: we are able to predict the time it will take the solver to obtain an approximation of a certain (arbitrarily high) quality.
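A bare-bones illustration of Picard iteration with iteration-dependent precision growth, in the spirit of the static optimisation described here. This is not Edalat and Pattinson's validated domain-theoretic algorithm: it uses ordinary arbitrary-precision floating point (mpmath) with no enclosure of rounding errors, and the grid, iteration count, and precision schedule are arbitrary choices.

```python
from mpmath import mp, mpf

def picard_solve(f, t0, y0, T, n_steps=64, iterations=8, base_prec=30, prec_step=10):
    """Approximate y' = f(t, y), y(t0) = y0 on [t0, T] by Picard iteration:
    y_{k+1}(t) = y0 + integral_{t0}^{t} f(s, y_k(s)) ds (trapezoid quadrature)."""
    ts = [mpf(t0) + (mpf(T) - mpf(t0)) * i / n_steps for i in range(n_steps + 1)]
    ys = [mpf(y0)] * (n_steps + 1)              # initial iterate: the constant y0
    for k in range(iterations):
        mp.prec = base_prec + prec_step * k     # grow working precision per iteration
        acc, new = mpf(0), [mpf(y0)]
        for i in range(n_steps):
            h = ts[i + 1] - ts[i]
            acc += h * (f(ts[i], ys[i]) + f(ts[i + 1], ys[i + 1])) / 2
            new.append(mpf(y0) + acc)
        ys = new
    return ts, ys

ts, ys = picard_solve(lambda t, y: y, 0, 1, 1)   # y' = y, y(0) = 1
print(ys[-1])                                    # approaches e = 2.71828...
```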
Abstract:
We assessed summation of contrast across eyes and area at detection threshold (Ct). Stimuli were sine-wave gratings (2.5 c/deg) spatially modulated by cosine- and anticosine-phase raised plaids (0.5 c/deg components oriented at ±45°). When presented dichoptically the signal regions were interdigitated across eyes but produced a smooth continuous grating following their linear binocular sum. The average summation ratio (Ct1/Ct1+2) for this stimulus pair was 1.64 (4.3 dB). This was only slightly less than the binocular summation found for the same patch type presented to both eyes, and the area summation found for the two different patch types presented to the same eye. We considered 192 model architectures containing each of the following four elements in all possible orders: (i) linear summation or a MAX operator across eyes, (ii) linear summation or a MAX operator across area, (iii) linear or accelerating contrast transduction, and (iv) additive Gaussian, stochastic noise. Formal equivalences reduced this to 62 different models. The most successful four-element model was: linear summation across eyes followed by nonlinear contrast transduction, linear summation across area, and late noise. Model performance was enhanced when additional nonlinearities were placed before binocular summation and after area summation. The implications for models of probability summation and uncertainty are discussed.
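The winning architecture is simple enough to state as code. In the sketch below the transducer exponent and noise level are placeholders rather than fitted values, but the ordering of the four elements follows the abstract; note also that the decibel conversion used for the summation ratio is 20·log10, since 20·log10(1.64) ≈ 4.3 dB.

```python
import numpy as np

def model_response(left, right, p=2.0, noise_sd=1.0, rng=None):
    """left/right: arrays of signal contrasts per patch region in each eye.
    p and noise_sd are illustrative placeholders, not fitted parameters."""
    rng = rng or np.random.default_rng(0)
    binocular = left + right                  # (i)  linear summation across eyes
    transduced = binocular ** p               # (ii) accelerating contrast transduction
    pooled = transduced.sum()                 # (iii) linear summation across area
    return pooled + rng.normal(0, noise_sd)   # (iv) late additive Gaussian noise

summation_ratio_db = 20 * np.log10(1.64)      # = 4.3 dB, as reported above
```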
Abstract:
How are innovative new business models established if organizations constantly compare themselves against existing criteria and expectations? The objective is to address this question from the perspective of innovators and their ability to redefine established expectations and evaluation criteria. The research questions ask whether there are discernible patterns of discursive action through which innovators theorize institutional change and what role such theorizations play for mobilizing support and realizing change projects. These questions are investigated through a case study on a critical area of enterprise computing software, Java application servers. In the present case, business practices and models were already well established among incumbents with critical market areas allocated to few dominant firms. Fringe players started experimenting with a new business approach of selling services around freely available open-source application servers. While most new players struggled, one new entrant succeeded in leading incumbents to adopt and compete on the new model. The case demonstrates that innovative and substantially new models and practices are established in organizational fields when innovators are able to redefine expectations and evaluation criteria within an organizational field. The study addresses the theoretical paradox of embedded agency. Actors who are embedded in prevailing institutional logics and structures find it hard to perceive potentially disruptive opportunities that fall outside existing ways of doing things. Changing prevailing institutional logics and structures requires strategic and institutional work aimed at overcoming barriers to innovation. The study addresses this problem through the lens of (new) institutional theory, using a discourse methodology to trace the process through which innovators were able to establish a new social and business model in the field.