65 results for Superlinear and Semi–Superlinear Convergence
Abstract:
Douglass North is a pivotal figure in the development of the 'new' economic history as well as the 'new' institutional economics. However, the relationship between these two aspects of his thinking remains undeveloped in previous critical assessments of North's work. The relationship is clarified here. The evidence presented indicates that three distinct phases can be distinguished in his writings between the 1950s and the 2000s. The paper relates these changing views to the shifting mainstream within economics and the effects that this shift has in turn had on economic history research. Economic history has adapted to economic research by abandoning some practices associated with the earlier cliometric literature. Furthermore, North is unique to the extent that his recent writings represent something of a convergence with 'old' institutionalism. © 2010 Taylor & Francis.
Abstract:
We present a general method to construct a set of local rectilinear vibrational coordinates for a nonlinear molecule whose reference structure does not necessarily correspond to a stationary point of the potential-energy surface. We show both analytically and with a numerical example that the vibrational coordinates satisfy Eckart's conditions. In addition, we find that the Watson Hamiltonian provides a fairly robust description even of highly excited vibrational states of triatomic molecules, except for a few states of large amplitude motion sampling the singular region of the Hamiltonian. These states can be identified through slow convergence.
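For reference, the Eckart conditions that the constructed rectilinear displacement coordinates must satisfy take the standard form below (a textbook statement, not reproduced from the paper itself), where m_i are the atomic masses, a_i the reference-structure positions and d_i the displacements:

```latex
% Translational Eckart condition: the displacements carry no net linear momentum
\sum_i m_i \, \mathbf{d}_i = \mathbf{0}
% Rotational Eckart condition: no net angular momentum relative to the reference frame
\sum_i m_i \, \mathbf{a}_i \times \mathbf{d}_i = \mathbf{0}
```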
Abstract:
There is a growing literature examining the impact of research on informing policy, and of research and policy on practice. Research and policy do not have the same types of impact on practice but can be evaluated using similar approaches. Sometimes the literature provides a platform for methodological debate but mostly it is concerned with how research can link to improvements in the process and outcomes of education, how it can promote innovative policies and practice, and how it may be successfully disseminated. Whether research-informed or research-based, policy and its implementation is often assessed on such 'hard' indicators of impact as changes in the number of students gaining five or more A to C grades in national examinations or a percentage fall in the number of exclusions in inner city schools. Such measures are necessarily crude, with large samples smoothing out errors and disguising instances of significant success or failure. Even when 'measurable' in such a fashion, however, the impact of any educational change or intervention may require a period of years to become observable. This paper considers circumstances in which short-term change may be implausible or difficult to observe. It explores how impact is currently theorized and researched and promotes the concept of 'soft' indicators of impact in circumstances in which the pursuit of conventional quantitative and qualitative evidence is rendered impractical within a reasonable cost and timeframe. Such indicators are characterized by their avowedly subjective, anecdotal and impressionistic provenance and have particular importance in the context of complex community education issues where the assessment of any impact often faces considerable problems of access. These indicators include the testimonies of those on whom the research intervention or policy focuses (for example, students, adult learners), the formative effects that are often reported (for example, by head teachers, community leaders) and media coverage. The collation and convergence of a wide variety of soft indicators (Where there is smoke …) is argued to offer a credible means of identifying subtle processes that are often neglected as evidence of potential and actual impact (… there is fire).
Abstract:
A method is proposed to accelerate the evaluation of the Green's function of an infinite doubly periodic array of thin wire antennas. The method is based on the expansion of the Green's function into series corresponding to the propagating and evanescent waves and the use of Poisson and Kummer transformations enhanced with the analytic summation of the slowly convergent asymptotic terms. Unlike existing techniques, the procedure reported here provides uniform convergence regardless of the geometrical parameters of the problem or the plane-wave excitation wavelength. In addition, it is numerically stable and does not require numerical integration or internal tuning parameters, since all necessary series are directly calculated in terms of analytical functions. This means that, for nonlinear problem scenarios, the algorithm can be deployed without run-time intervention or recursive adjustment within a harmonic balance engine. Numerical examples are provided to illustrate the efficiency and accuracy of the developed approach as compared with the Ewald method, which for these classes of problems requires run-time adaptation of the splitting parameter.
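The Kummer transformation mentioned above follows a standard pattern, sketched generically below (not the paper's specific series): the slowly convergent series is paired with an asymptotically matching comparison series whose sum B is known in closed form, leaving a rapidly convergent remainder:

```latex
% Generic Kummer transformation: b_n asymptotically matches a_n, and B = \sum_n b_n is known analytically
S \;=\; \sum_{n} a_n \;=\; \sum_{n} \bigl(a_n - b_n\bigr) \;+\; B
```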
Abstract:
A novel most-significant-digit-first CORDIC architecture is presented that is suitable for the VLSI design of systolic array processor cells for performing QR decomposition. This is based on an on-line CORDIC algorithm with a constant scale factor and a latency independent of the wordlength, and has been derived through the extension of previously published CORDIC algorithms. It is shown that simplifying the calculation of convergence bounds also greatly simplifies the derivation of suitable VLSI architectures. Design studies, based on a 0.35-µm CMOS standard cell process, indicate that 20 such QR processor cells operating at rates suitable for radar beamforming can be readily accommodated on a single chip.
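As background, a conventional shift-and-add CORDIC vectoring stage of the kind such QR processor cells rely on can be sketched in a few lines. The Python below is an illustrative floating-point model only (generic iteration count and scale-factor correction), not the MSD-first architecture described in the abstract:

```python
import math

def cordic_vectoring(x, y, n_iter=16):
    """Rotate (x, y) onto the x-axis with shift-and-add CORDIC iterations.

    Returns the magnitude and the accumulated rotation angle, which is the
    information a Givens-rotation (QR) processor cell needs."""
    z = 0.0
    for k in range(n_iter):
        sigma = -1 if y >= 0 else 1            # rotate towards the x-axis
        x, y = x - sigma * y * 2.0**-k, y + sigma * x * 2.0**-k
        z -= sigma * math.atan(2.0**-k)        # accumulate the applied angle
    # Constant scale factor K = prod sqrt(1 + 2^(-2k)); divide it out once.
    K = math.prod(math.sqrt(1 + 2.0**(-2 * k)) for k in range(n_iter))
    return x / K, z

# Example: magnitude and angle of the vector (3, 4)
print(cordic_vectoring(3.0, 4.0))              # roughly (5.0, 0.927)
```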
Abstract:
This paper considers a Q-ary orthogonal direct-sequence code-division multiple-access (DS-CDMA) system with high-rate space-time linear dispersion codes (LDCs) in time-varying Rayleigh fading multiple-input multiple-output (MIMO) channels. We propose a joint multiuser detection, LDC decoding, Q-ary demodulation, and channel-decoding algorithm and apply the turbo processing principle to improve system performance in an iterative fashion. The proposed iterative scheme demonstrates faster convergence and superior performance compared with the V-BLAST-based DS-CDMA system and is shown to approach the single-user performance bound. We also show that the CDMA system is able to exploit the time diversity offered by the LDCs in rapid-fading channels.
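The linear dispersion codes referred to here follow the general Hassibi–Hochwald construction; as a generic reminder (not the specific dispersion matrices used in the paper), Q complex data symbols s_q = alpha_q + j*beta_q are spread over T channel uses and M transmit antennas by fixed matrices A_q, B_q:

```latex
% Generic linear dispersion codeword (T x M transmit matrix)
\mathbf{S} \;=\; \sum_{q=1}^{Q} \left( \alpha_q \mathbf{A}_q + j\,\beta_q \mathbf{B}_q \right)
```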
Abstract:
An optimal search theory, the so-called Lévy-flight foraging hypothesis(1), predicts that predators should adopt search strategies known as Lévy flights where prey is sparse and distributed unpredictably, but that Brownian movement is sufficiently efficient for locating abundant prey(2-4). Empirical studies have generated controversy because the accuracy of statistical methods that have been used to identify Lévy behaviour has recently been questioned(5,6). Consequently, whether foragers exhibit Lévy flights in the wild remains unclear. Crucially, moreover, it has not been tested whether observed movement patterns across natural landscapes having different expected resource distributions conform to the theory's central predictions. Here we use maximum-likelihood methods to test for Lévy patterns in relation to environmental gradients in the largest animal movement data set assembled for this purpose. Strong support was found for Lévy search patterns across 14 species of open-ocean predatory fish (sharks, tuna, billfish and ocean sunfish), with some individuals switching between Lévy and Brownian movement as they traversed different habitat types. We tested the spatial occurrence of these two principal patterns and found Lévy behaviour to be associated with less productive waters (sparser prey) and Brownian movements to be associated with productive shelf or convergence-front habitats (abundant prey). These results are consistent with the Lévy-flight foraging hypothesis(1,7), supporting the contention(8,9) that organism search strategies naturally evolved in such a way that they exploit optimal Lévy patterns.
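The maximum-likelihood machinery typically used in such tests fits a power-law (Lévy) and an exponential (Brownian-like) model to the observed step lengths and compares their likelihoods. The sketch below shows only the standard closed-form MLE for the power-law exponent, as an assumed illustration rather than the study's full model-selection procedure:

```python
import numpy as np

def powerlaw_mle(steps, x_min):
    """MLE of the exponent mu for p(x) proportional to x**(-mu), x >= x_min, mu > 1."""
    x = np.asarray(steps, dtype=float)
    x = x[x >= x_min]
    mu_hat = 1.0 + x.size / np.sum(np.log(x / x_min))
    return mu_hat, x.size

# Example with synthetic power-law step lengths (true mu = 2.0)
rng = np.random.default_rng(0)
steps = (1.0 - rng.random(10_000)) ** (-1.0 / (2.0 - 1.0))   # inverse-CDF sampling, x_min = 1
print(powerlaw_mle(steps, x_min=1.0))                        # mu_hat close to 2.0
```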
Abstract:
We prove a continuity result for the map sending a masa-bimodule to its support. We characterise the convergence of a net of weakly closed convex hulls of bilattices in terms of the convergence of the corresponding supports, and establish a lower-semicontinuity result for the map sending a support to the corresponding masa-bimodule.
Abstract:
The convergence of the iterative identification algorithm for a general Hammerstein system has been an open problem for a long time. In this paper, it is shown that the convergence can be achieved by incorporating a regularization procedure on the nonlinearity in addition to a normalization step on the parameters.
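A minimal sketch of the kind of normalized, regularized alternating estimation the result concerns is given below, under simplifying assumptions of my own (polynomial static nonlinearity, FIR linear block, ridge-type regularization); it illustrates the idea, not the paper's exact algorithm:

```python
import numpy as np

def identify_hammerstein(u, y, deg=3, lags=5, lam=1e-3, n_iter=50):
    """Alternating least squares for y(t) = sum_j b_j * f(u(t-j)) with a
    polynomial nonlinearity f; the nonlinearity coefficients are
    ridge-regularized (lam) and normalized each pass to fix the scale."""
    u, y = np.asarray(u, float), np.asarray(y, float)
    T = np.arange(lags, len(u))
    U = np.stack([u[T - j] for j in range(lags)], axis=1)        # lagged inputs
    Phi = np.stack([U ** (m + 1) for m in range(deg)], axis=2)   # their monomials
    yt = y[T]
    a = np.ones(deg) / np.sqrt(deg)                              # nonlinearity coefficients
    for _ in range(n_iter):
        V = Phi @ a                                              # fix a: regress y on filtered nonlinearity
        b, *_ = np.linalg.lstsq(V, yt, rcond=None)
        W = np.einsum('tjm,j->tm', Phi, b)                       # fix b: regularized fit of a
        a = np.linalg.solve(W.T @ W + lam * np.eye(deg), W.T @ yt)
        a /= np.linalg.norm(a)                                   # normalization step
    return a, b
```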
Abstract:
This paper presents a social simulation in which we add an additional layer of mass media communication to the social network 'bounded confidence' model of Deffuant et al. (2000). A population of agents on a lattice with continuous opinions and bounded confidence adjust their opinions on the basis of binary social network interactions between neighbours or communication with a fixed opinion. There are two mechanisms for interaction: 'social interaction' occurs between neighbours on a lattice, and 'mass communication' adjusts opinions based on an agent interacting with a fixed opinion. Two new variables are added: polarisation, the degree to which two mass media opinions differ, and broadcast ratio, the number of social interactions for each mass media communication. Four dynamical regimes are observed: fragmented, double extreme convergence, a state of persistent opinion exchange leading to single extreme convergence, and a disordered state. Double extreme convergence is found where agents are less willing to change opinion and mass media communications are common, or where there is moderate willingness to change opinion and a high frequency of mass media communications. Single extreme convergence is found where there is moderate willingness to change opinion and a lower frequency of mass media communication. A period of persistent opinion exchange precedes single extreme convergence; it is characterized by the formation of two opposing groups of opinion separated by a gradient of opinion exchange. With even very low frequencies of mass media communications, this results in a move to central opinions followed by a global drift to one extreme as one of the opposing groups of opinion dominates. A similar pattern of findings is observed for von Neumann and Moore neighbourhoods.
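For orientation, the two interaction mechanisms reduce to variants of the original Deffuant bounded-confidence update, sketched below; the parameter names and values (eps, mu, the media opinion) are illustrative placeholders rather than the paper's settings:

```python
def social_step(x_i, x_j, eps=0.2, mu=0.5):
    """Bounded-confidence update between two neighbouring agents (Deffuant et al.)."""
    if abs(x_i - x_j) < eps:
        x_i, x_j = x_i + mu * (x_j - x_i), x_j + mu * (x_i - x_j)
    return x_i, x_j

def media_step(x_i, media_opinion, eps=0.2, mu=0.5):
    """Same rule, but the second party is a fixed mass-media opinion."""
    if abs(x_i - media_opinion) < eps:
        x_i += mu * (media_opinion - x_i)
    return x_i

# Example: an agent within the confidence bound is drawn towards a media opinion at 0.9
x = 0.75
for _ in range(20):
    x = media_step(x, media_opinion=0.9)
print(x)   # converges towards 0.9
```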
Abstract:
This paper illuminates the role of political language in a peace process through analysing the discourse used by political parties in Northern Ireland. What matters, it seems, is not whether party discourses converge or diverge but rather how, and in what ways, they do so. In the case of Northern Ireland, there remains strong divergence between discourses regarding the ethos of unionist and nationalist parties. As a consequence, core definitions of identity, culture, norms and principle remain common grounds for competition within nationalism and unionism. There has, however, been a significant shift towards convergence between unionist and nationalist parties in their discourses on power and governance, specifically among the now predominant (hardline) and the smaller (moderate) parties. The argument thus elaborated is that political transition from conflict need not necessarily entail the creation of a “shared discourse” between all parties. Indeed, points of divergence between parties’ discourses of power and ethos are as important for a healthy post-conflict democratic environment as the elements of convergence between them.
Abstract:
This article examines why England and Wales have comparatively one of the most stringent systems for the governance of sexual offending within Western Europe. While England and Wales, like the USA, have adopted broadly exclusionary, managerialist penal policies based around incapacitation and targeted surveillance, many other Western European countries have opted for more inclusionary therapeutic interventions. Divergences in state approaches to sex offender risk, particularly in relation to notification and vetting schemes, are initially examined with reference to the respective theoretical frameworks of ‘policy transfer’ and differing political economies. Chiefly, however, differences in penal policies are attributed to the social and political construction of risk and its control. There may be multiple expressions of risk relating to expert, lay, moral or emotive aspects. It is argued, however, that it is the particular convergence and alignment of these dimensions on the part of the various stakeholders in the UK – government, media, public and professional – that leads to risk becoming institutionalized in the form of punitive regulatory policies for managing the dangerous.
Abstract:
Two- and three-photon detachment rates have been obtained for F- using several expansions in the R-matrix Floquet approach. These rates are compared with other theoretical and experimental results. The use of Hartree-Fock wavefunctions for the ground state of F with addition of continuum electrons does not lead to agreement with experiment for two- and three-photon detachment. By adding correlation terms, agreement with experiment and other theoretical results is improved considerably, demonstrating the importance of electron correlation effects. However, convergence with respect to the wavefunction expansion cannot be established. We also study the intensity dependence of multiphoton detachment rates for F- at the Nd:YAG frequency. Due to the ponderomotive shift, the three-photon detachment channel closes at an intensity of 8.5 × 10^11 W cm^-2, and the influence of this channel closure on the multiphoton detachment peaks is illustrated by determining the heights of the excess-photon peaks obtained using a Gaussian laser pulse.
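The channel closure invoked here follows the standard ponderomotive-shift argument: an n-photon detachment channel closes once n photon energies no longer exceed the field-free binding energy plus the ponderomotive energy of the detached electron. In generic form (standard formulae, not taken from the paper):

```latex
% Ponderomotive energy for field amplitude E_0 and angular frequency \omega
U_p = \frac{e^2 E_0^2}{4 m_e \omega^2},
\qquad \text{the $n$-photon channel closes when } n\hbar\omega < E_b + U_p,
```

with E_b the field-free binding energy (here the electron affinity of F).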
Abstract:
In many situations, the number of data points is fixed, and the asymptotic convergence results of popular model selection tools may not be useful. A new algorithm for model selection, RIVAL (removing irrelevant variables amidst Lasso iterations), is presented and shown to be particularly effective for a large but fixed number of data points. The algorithm is motivated by an application in nuclear material detection where all unknown parameters are to be non-negative. Thus, positive Lasso and its variants are analyzed. Then, RIVAL is proposed and is shown to have some desirable properties, namely that the number of data points needed for convergence is smaller than for existing methods.
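A hedged sketch of the two ingredients mentioned, a positivity-constrained Lasso fit followed by removal of variables whose coefficients are driven to zero, is given below using scikit-learn; this is an illustrative loop, not the published RIVAL algorithm:

```python
import numpy as np
from sklearn.linear_model import Lasso

def prune_with_positive_lasso(X, y, alpha=0.01, n_rounds=5, tol=1e-8):
    """Repeatedly fit a non-negative Lasso and drop variables whose
    coefficients stay (numerically) at zero, keeping the survivors."""
    active = np.arange(X.shape[1])
    model = None
    for _ in range(n_rounds):
        model = Lasso(alpha=alpha, positive=True, max_iter=10_000)
        model.fit(X[:, active], y)
        keep = model.coef_ > tol
        if keep.all() or not keep.any():
            break
        active = active[keep]                 # remove irrelevant variables
    return active, model

# Example: a small synthetic non-negative problem with two relevant columns
rng = np.random.default_rng(1)
X = rng.random((50, 10))
beta = np.zeros(10); beta[[2, 7]] = [1.5, 0.8]
y = X @ beta + 0.01 * rng.standard_normal(50)
print(prune_with_positive_lasso(X, y)[0])     # indices of the surviving variables
```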