170 results for "premature convergence problem"
Abstract:
First: A continuous-time version of Kyle's model (Kyle 1985), known as Back's model (Back 1992), of asset pricing with asymmetric information is studied. A larger class of price processes and of noise traders' processes is considered. The price process, as in Kyle's model, is allowed to depend on the path of the market order. The noise traders' process is an inhomogeneous Lévy process. Solutions are found through the Hamilton-Jacobi-Bellman equations. When the insider is risk-neutral, the price pressure is constant and there is no equilibrium in the presence of jumps. If the insider is risk-averse, there is no equilibrium in the presence of either jumps or drifts. The case in which the release time of information is unknown is also analysed, and a general relation is established between the problem of finding an equilibrium and the enlargement of filtrations. The case of a random announcement time is also considered. In that case the market is not fully efficient, and an equilibrium exists if the sensitivity of prices with respect to the global demand decreases in time in accordance with the distribution of the random time. Second: Power variations. The asymptotic behavior of the power variation of processes of the form $\int_0^t u(s-)\,dS(s)$ is considered, where $S$ is an $\alpha$-stable process with index of stability $0<\alpha<2$ and the integral is an Itô integral. Stable convergence of the corresponding fluctuations is established. These results provide statistical tools to infer the process $u$ from discrete observations. Third: A bond market is studied in which the short rate $r(t)$ evolves as an integral of $g(t-s)\sigma(s)$ with respect to $W(ds)$, where $g$ and $\sigma$ are deterministic and $W$ is the stochastic Wiener measure. Processes of this type are particular cases of ambit processes and are in general not semimartingales.
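For reference, the power-variation statistic in the second part is, in its usual form (a standard definition sketched here; the paper's normalization may differ):

```latex
V^p_n(X)_t \;=\; \sum_{i=1}^{\lfloor n t \rfloor} \bigl| X_{i/n} - X_{(i-1)/n} \bigr|^p ,
\qquad X_t \;=\; \int_0^t u(s-)\, dS(s),
```

whose limit behavior as $n \to \infty$ depends on the interplay between the power $p$ and the stability index $\alpha$.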
Abstract:
We formulate a necessary and sufficient condition for polynomials to be dense in a space of continuous functions on the real line, with respect to Bernstein's weighted uniform norm. Equivalently, for a positive finite measure $\mu$ on the real line we give a criterion for density of polynomials in $L^p(\mu)$.
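For orientation, a standard formulation of Bernstein's approximation problem (sketched here under the usual assumptions; the paper's exact setting may differ): given a weight $W : \mathbb{R} \to [1,\infty)$ growing faster than any polynomial, one considers the space

```latex
C_W \;=\; \Bigl\{ f \in C(\mathbb{R}) : \lim_{|x| \to \infty} \tfrac{f(x)}{W(x)} = 0 \Bigr\},
\qquad
\| f \|_W \;=\; \sup_{x \in \mathbb{R}} \frac{|f(x)|}{W(x)},
```

and asks for which weights $W$ the polynomials are dense in $(C_W, \|\cdot\|_W)$.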
Abstract:
This paper investigates the role of learning by private agents and the central bank (two-sided learning) in a New Keynesian framework in which both sides of the economy have asymmetric and imperfect knowledge about the true data generating process. We assume that all agents employ the data that they observe (which may be distinct for different sets of agents) to form beliefs about unknown aspects of the true model of the economy, use their beliefs to decide on actions, and revise these beliefs through a statistical learning algorithm as new information becomes available. We study the short-run dynamics of our model and derive its policy recommendations, particularly with respect to central bank communications. We demonstrate that two-sided learning can generate substantial increases in volatility and persistence, and alter the behavior of the variables in the model in a significant way. Our simulations do not converge to a symmetric rational expectations equilibrium, and we highlight one source that invalidates the convergence results of Marcet and Sargent (1989). Finally, we identify a novel aspect of central bank communication in models of learning: communication can be harmful if the central bank's model is substantially mis-specified.
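The abstract does not spell out the learning algorithm; below is a minimal sketch of the recursive least-squares belief updating commonly used in this literature (the variable names and the gain choice are illustrative assumptions, not the paper's specification):

```python
import numpy as np

def rls_update(beta, R, x, y, gain):
    """One recursive least-squares step: agents regress the observable y
    on their perceived regressors x and update their beliefs beta.

    beta : current coefficient vector (the agents' beliefs)
    R    : current estimate of the regressors' second-moment matrix
    x    : regressor vector observed this period
    y    : scalar outcome observed this period
    gain : learning gain (e.g. 1/t for decreasing gain, or a small constant)
    """
    R_new = R + gain * (np.outer(x, x) - R)
    beta_new = beta + gain * np.linalg.solve(R_new, x * (y - x @ beta))
    return beta_new, R_new
```

With a decreasing gain this is the setting of the Marcet-Sargent convergence results cited above; with asymmetric information sets, each group of agents runs its own update on its own data.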
Abstract:
The EU has, since the early days of the Community, had the ambition to speak with ‘a single voice’ in international fora, in particular in the United Nations’ General Assembly. This aspiration, which has become more pronounced since the inauguration of the CFSP, has not always been easy to achieve due to domestic- or international-level factors affecting the EU member states. However, in the last decade there has been a dramatic increase in convergence in the Fifteen’s voting record. This paper examines the underlying reasons for this convergence.
Abstract:
The application of discriminant function analysis (DFA) is not a new idea in the study of tephrochronology. In this paper, DFA is applied to compositional datasets of two different types of tephras from Mount Ruapehu in New Zealand and Mount Rainier in the USA. The canonical variables from the analysis are further investigated with a statistical methodology for change-point problems in order to gain a better understanding of the change in compositional pattern over time. Finally, a special case of segmented regression is proposed to model both the time of change and the change in pattern. This model can be used to estimate the age of unknown tephras using Bayesian statistical calibration.
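A minimal sketch of the segmented-regression idea (a least-squares grid search for a single breakpoint; the paper's Bayesian treatment is richer, and all names below are illustrative):

```python
import numpy as np

def fit_segmented(t, y):
    """Fit y = a1 + b1*t for t <= c and y = a2 + b2*t for t > c, choosing
    the breakpoint c among the observed times (assumed sorted and distinct)
    to minimize the total residual sum of squares."""
    best_rss, best_c = np.inf, None
    for c in t[1:-2]:                       # keep >= 2 points on each side
        rss = 0.0
        for mask in (t <= c, t > c):
            X = np.column_stack([np.ones(mask.sum()), t[mask]])
            coef, *_ = np.linalg.lstsq(X, y[mask], rcond=None)
            rss += np.sum((y[mask] - X @ coef) ** 2)
        if rss < best_rss:
            best_rss, best_c = rss, c
    return best_c, best_rss
```

The Bayesian calibration step would replace this point estimate with a posterior over the change point and segment parameters.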
Abstract:
This paper proposes a heuristic for the scheduling of capacity requests and the periodic assignment of radio resources in geostationary (GEO) satellite networks with star topology, using the Demand Assigned Multiple Access (DAMA) protocol in the link layer, and Multi-Frequency Time Division Multiple Access (MF-TDMA) and Adaptive Coding and Modulation (ACM) in the physical layer.
Abstract:
The information and communication technologies (ICT) sectors are in a process of technological convergence. The determining factors in this process are the liberalisation of the telecommunications markets and technological change. Many firms are engaged in a process of mergers and alliances to position themselves in this new framework. Technological and demand uncertainties are very important. Our objective in this paper is to study the economic determinants of the strategies of the firms. With this aim, we review some key technological and demand aspects. We shed some light on the strategic motivations of the firms by establishing a parallel with the evolution of the retailing sector.
Abstract:
An algorithm that optimizes and builds pairings for airline crews, subsequently implemented in Java.
Abstract:
The level of ab initio theory which is necessary to compute reliable values for the static and dynamic (hyper)polarizabilities of three medium-size π-conjugated organic nonlinear optical (NLO) molecules is investigated. The calculations were made feasible by employing field-induced coordinates in combination with a finite field procedure. It is found that, to obtain reasonable values for the various individual contributions to the (hyper)polarizability, it is necessary to include electron correlation. Based on the results, the convergence of the usual perturbation treatment for vibrational anharmonicity was examined.
Abstract:
Initial convergence of the perturbation series expansion for vibrational nonlinear optical (NLO) properties was analyzed. The zero-point vibrational average (ZPVA) was obtained through first order in mechanical plus electrical anharmonicity. Results indicated that higher-order terms in electrical and mechanical anharmonicity can make substantial contributions to the pure vibrational polarizability of typical NLO molecules.
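For orientation, the perturbation treatment referred to in the last two abstracts decomposes a (hyper)polarizability $P$ in the standard way (a sketch; notation varies across papers):

```latex
P \;=\; P^{\mathrm{el}} \;+\; P^{\mathrm{zpva}} \;+\; P^{\mathrm{pv}},
```

where $P^{\mathrm{el}}$ is the electronic value at the equilibrium geometry, $P^{\mathrm{zpva}}$ is the zero-point vibrational average correction, and $P^{\mathrm{pv}}$ is the pure vibrational contribution; the latter two are expanded order by order in electrical and mechanical anharmonicity, and the convergence in question is that of this expansion.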
Abstract:
Recently, the surprising result that ab initio calculations on benzene and other planar arenes at correlated MP2, MP3, configuration interaction with singles and doubles (CISD), and coupled cluster with singles and doubles levels of theory using standard Pople’s basis sets yield nonplanar minima has been reported. The planar optimized structures turn out to be transition states presenting one or more large imaginary frequencies, whereas single-determinant-based methods lead to the expected planar minima and no imaginary frequencies. It has been suggested that such anomalous behavior may originate from the two-electron basis set incompleteness error. In this work, we show that the reported pitfalls can be interpreted in terms of intramolecular basis set superposition error (BSSE) effects, mostly between the C–H moieties constituting the arenes. We have carried out counterpoise-corrected optimizations and frequency calculations at the Hartree–Fock, B3LYP, MP2, and CISD levels of theory with several basis sets for a number of arenes. In all cases, correcting for intramolecular BSSE fixes the anomalous behavior of the correlated methods, whereas no significant differences are observed in the single-determinant case. Consequently, all systems studied are planar at all levels of theory. The effect of different intramolecular fragment definitions and the particular case of charged species, namely the cyclopentadienyl and indenyl anions, are also discussed.
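For reference, the counterpoise correction mentioned above is, in the standard Boys–Bernardi two-fragment form (shown here for fragments A and B; the intramolecular variant in the paper applies it to fragments of a single molecule, such as the C–H moieties):

```latex
\Delta E^{\mathrm{CP}}_{\mathrm{int}}
\;=\; E^{AB}_{AB} \;-\; E^{AB}_{A} \;-\; E^{AB}_{B},
```

where subscripts denote the system evaluated and superscripts the basis set used, so each fragment is computed in the full combined basis to cancel the basis set superposition error.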
Abstract:
Standard practice in Bayesian VARs is to formulate priors on the autoregressive parameters, but economists and policy makers actually have priors about the behavior of observable variables. We show how this kind of prior can be used in a VAR under strict probability theory principles. We state the inverse problem to be solved and we propose a numerical algorithm that works well in practical situations with a very large number of parameters. We prove various convergence theorems for the algorithm. As an application, we first show that the results in Christiano et al. (1999) are very sensitive to the introduction of various priors that are widely used. These priors turn out to be associated with undesirable priors on observables. But an empirical prior on observables helps clarify the relevance of these estimates: we find much higher persistence of output responses to monetary policy shocks than the one reported in Christiano et al. (1999) and a significantly larger total effect.
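The paper's algorithm is not spelled out in the abstract; as a minimal sketch of the underlying idea (inspecting, by simulation, the prior that a prior on autoregressive parameters induces on an observable, here the impulse response of a univariate AR(p); everything below is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(0)

def impulse_response(phi, horizon):
    """Impulse response of an AR(p) with coefficients phi to a unit shock."""
    p = len(phi)
    irf = np.zeros(horizon)
    irf[0] = 1.0
    for h in range(1, horizon):
        irf[h] = sum(phi[j] * irf[h - 1 - j] for j in range(min(p, h)))
    return irf

# Draw AR(2) coefficients from a loose parameter prior and look at the
# prior this induces on an observable: the response to a shock at h = 12.
draws = [impulse_response(rng.normal([1.0, -0.2], 0.2), 13)[12]
         for _ in range(5000)]
print(np.percentile(draws, [5, 50, 95]))
```

A seemingly innocuous prior on parameters can concentrate this induced distribution in implausible regions, which is the kind of undesirable prior on observables the paper diagnoses.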
Abstract:
From a managerial point of view, the more efficient, simple, and parameter-free (ESP) an algorithm is, the more likely it will be used in practice for solving real-life problems. Following this principle, an ESP algorithm for solving the Permutation Flowshop Sequencing Problem (PFSP) is proposed in this article. Using an Iterated Local Search (ILS) framework, the so-called ILS-ESP algorithm is able to compete in performance with other well-known ILS-based approaches, which are considered among the most efficient algorithms for the PFSP. However, while other similar approaches still employ several parameters that can affect their performance if not properly chosen, our algorithm does not require any particular fine-tuning process since it uses basic "common sense" rules for the local search, perturbation, and acceptance criterion stages of the ILS metaheuristic. Our approach defines a new operator for the ILS perturbation process, a new acceptance criterion based on extremely simple and transparent rules, and a biased randomization process of the initial solution to randomly generate different alternative initial solutions of similar quality, which is attained by applying a biased randomization to a classical PFSP heuristic. This diversification of the initial solution aims at avoiding poorly designed starting points and, thus, allows the methodology to take advantage of current trends in parallel and distributed computing. A set of extensive tests, based on literature benchmarks, has been carried out in order to validate our algorithm and compare it against other approaches. These tests show that our parameter-free algorithm is able to compete with state-of-the-art metaheuristics for the PFSP. Also, the experiments show that, when using parallel computing, it is possible to improve the top ILS-based metaheuristic by just incorporating into it our biased randomization process with a high-quality pseudo-random number generator.
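A minimal sketch of the ILS skeleton described above (the actual ILS-ESP operators are more elaborate; the makespan evaluation, adjacent-swap local search, and acceptance rule below are illustrative assumptions):

```python
import random

def makespan(perm, proc):
    """Completion time of the last job on the last machine for a permutation
    flowshop; proc[j][m] is the processing time of job j on machine m."""
    m = len(proc[0])
    c = [0.0] * m
    for j in perm:
        c[0] += proc[j][0]
        for k in range(1, m):
            c[k] = max(c[k], c[k - 1]) + proc[j][k]
    return c[-1]

def local_search(perm, proc):
    """First-improvement search over adjacent swaps."""
    improved = True
    while improved:
        improved = False
        for i in range(len(perm) - 1):
            cand = perm[:]
            cand[i], cand[i + 1] = cand[i + 1], cand[i]
            if makespan(cand, proc) < makespan(perm, proc):
                perm, improved = cand, True
    return perm

def ils(proc, iters=100):
    best = local_search(list(range(len(proc))), proc)
    cur = best[:]
    for _ in range(iters):
        pert = cur[:]
        i, j = random.sample(range(len(pert)), 2)    # perturbation: random swap
        pert[i], pert[j] = pert[j], pert[i]
        pert = local_search(pert, proc)
        if makespan(pert, proc) <= makespan(cur, proc):   # simple acceptance
            cur = pert
        if makespan(cur, proc) < makespan(best, proc):
            best = cur[:]
    return best
```

The biased randomization of the initial solution would replace `list(range(len(proc)))` with a randomized variant of a classical constructive heuristic, so that independent parallel runs start from distinct good seeds.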
Abstract:
The standard one-machine scheduling problem consists in scheduling a set of jobs on one machine, which can handle only one job at a time, minimizing the maximum lateness. Each job is available for processing at its release date, requires a known processing time and, after finishing its processing, is delivered after a certain time. There can also exist precedence constraints between pairs of jobs, requiring that the first job be completed before the second job can start. An extension of this problem consists in assigning a time interval between the processing of the jobs associated with the precedence constraints, known as finish-start time-lags. In the presence of these constraints, the problem is NP-hard even if preemption is allowed. In this work, we consider a special case of the one-machine preemptive scheduling problem with time-lags, where the time-lags have a chain form, and propose a polynomial algorithm to solve it. The algorithm consists of a polynomial number of calls to the preemptive version of the Longest Tail Heuristic. One application of the method is to obtain lower bounds for NP-hard one-machine and job-shop scheduling problems. We present some computational results of this application, followed by some conclusions.
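A minimal sketch of the preemptive Longest Tail Heuristic the algorithm builds on (release dates r, processing times p, delivery times/tails q; the event-driven loop below is an illustrative implementation without the time-lag extension):

```python
import heapq

def preemptive_longest_tail(r, p, q):
    """Preemptive schedule: always run the released, unfinished job with the
    largest tail q[j]; returns max over jobs of (completion time + q[j])."""
    n = len(r)
    order = sorted(range(n), key=lambda j: r[j])   # jobs by release date
    remaining = list(p)
    ready = []                                     # max-heap on tail (negated)
    t, i, best = 0, 0, 0
    while i < n or ready:
        if not ready:                              # idle until next release
            t = max(t, r[order[i]])
        while i < n and r[order[i]] <= t:          # collect newly released jobs
            heapq.heappush(ready, (-q[order[i]], order[i]))
            i += 1
        tail, j = heapq.heappop(ready)
        # run job j until it finishes or until the next release, whichever first
        run = remaining[j] if i == n else min(remaining[j], r[order[i]] - t)
        t += run
        remaining[j] -= run
        if remaining[j] > 0:
            heapq.heappush(ready, (tail, j))       # preempted, back in queue
        else:
            best = max(best, t - tail)             # completion time + q[j]
    return best
```

This rule is optimal for the preemptive one-machine problem without time-lags, which is why its value serves as a lower bound for the NP-hard variants mentioned above.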
Abstract:
We start with a generalization of the well-known three-door problem: the n-door problem. The solution of this new problem leads us to a beautiful representation system for real numbers in (0,1] as alternating series, known in the literature as Pierce expansions. A closer look at Pierce expansions will take us to some metrical properties of sets defined through the Pierce expansions of their elements. Finally, these metrical properties will enable us to present 'strange' sets, similar to the classical Cantor set.
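For reference, the Pierce expansion mentioned above represents each $x \in (0,1]$ as an alternating series (standard form):

```latex
x \;=\; \frac{1}{a_1} \;-\; \frac{1}{a_1 a_2} \;+\; \frac{1}{a_1 a_2 a_3} \;-\; \cdots,
\qquad a_1 < a_2 < a_3 < \cdots \ \text{positive integers},
```

with digits generated by $a_n = \lfloor 1/x_n \rfloor$ and $x_{n+1} = 1 - a_n x_n$, starting from $x_1 = x$. For example, $x = 2/5$ gives $a_1 = 2$, $a_2 = 5$, and indeed $2/5 = 1/2 - 1/10$.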