935 results for Unconditional Convergence


Relevance:

10.00%

Publisher:

Abstract:

We present a weakly nonlinear analysis of the interface dynamics in a radial Hele-Shaw cell driven by both injection and rotation. We extend the systematic expansion introduced in [E. Alvarez-Lacalle et al., Phys. Rev. E 64, 016302 (2001)] to the radial geometry and compute explicitly the first nonlinear contributions. We also find the necessary and sufficient condition for the uniform convergence of the nonlinear expansion. Within this region of convergence, the analytical predictions at low orders compare satisfactorily with exact solutions and with numerical integration of the problem. This is particularly remarkable in configurations (with no counterpart in the channel geometry) for which the interplay between injection and rotation allows that condition to be satisfied at all times. In the case of purely centrifugal forcing, we demonstrate that nonlinear couplings make the interface more unstable for lower viscosity contrast between the fluids.
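
For orientation, weakly nonlinear analyses of this type expand the interface perturbation in Fourier modes and derive coupled amplitude equations; the generic form below is an illustrative sketch of that standard setup, with the linear growth rates \lambda_n and the quadratic coupling coefficients g_{n,m} as placeholders rather than the paper's explicit expressions:

    R(\theta, t) = R_0(t)\Big[ 1 + \sum_{n} \delta_n(t)\, e^{i n \theta} \Big],
    \qquad
    \dot{\delta}_n = \lambda_n(t)\, \delta_n + \sum_{m} g_{n,m}(t)\, \delta_m\, \delta_{n-m} + \mathcal{O}(\delta^3).

Uniform convergence of the expansion then amounts to keeping the mode amplitudes small at all times, which is what the interplay between injection and rotation can guarantee in the configurations mentioned above.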

Relevance:

10.00%

Publisher:

Abstract:

Executive Summary. The unifying theme of this thesis is the pursuit of satisfactory ways to quantify the risk-reward trade-off in financial economics: first in the context of a general asset pricing model, then across models, and finally across country borders. The guiding principle in that pursuit was to seek innovative solutions by combining ideas from different fields in economics and broader scientific research. For example, in the first part of this thesis we sought a fruitful application of strong existence results in utility theory to topics in asset pricing. In the second part we implement an idea from the field of fuzzy set theory in the optimal portfolio selection problem, while the third part of this thesis is, to the best of our knowledge, the first empirical application of some general results in asset pricing in incomplete markets to the important topic of measuring financial integration. While the first two parts of this thesis effectively combine well-known ways to quantify risk-reward trade-offs, the third one can be viewed as an empirical verification of the usefulness of the so-called "good deal bounds" theory in designing risk-sensitive pricing bounds. Chapter 1 develops a discrete-time asset pricing model based on a novel ordinally equivalent representation of recursive utility. To the best of our knowledge, we are the first to use a member of a novel class of recursive utility generators to construct a representative agent model that addresses some long-standing issues in asset pricing. Applying strong representation results allows us to show that the model features countercyclical risk premia, for both consumption and financial risk, together with a low and procyclical risk-free rate. As the recursive utility used nests the well-known time-state separable utility as a special case, all results nest the corresponding ones from the standard model and thus shed light on its well-known shortcomings. The empirical investigation undertaken to support these theoretical results, however, showed that as long as one resorts to econometric methods based on approximating conditional moments with unconditional ones, it is not possible to distinguish the model we propose from the standard one. Chapter 2 is joint work with Sergei Sontchik. There we provide theoretical and empirical motivation for the aggregation of performance measures. The main idea is that, just as it makes sense to apply several performance measures ex post, it also makes sense to base optimal portfolio selection on ex-ante maximization of as many performance measures as desired. We thus offer a concrete algorithm for optimal portfolio selection via ex-ante optimization, over different horizons, of several risk-return trade-offs simultaneously. An empirical application of that algorithm, using seven popular performance measures, suggests that the realized returns feature better distributional characteristics than the realized returns from portfolio strategies that are optimal with respect to a single performance measure. When comparing the distributions of realized returns we used two partial risk-reward orderings: first- and second-order stochastic dominance. We first used the Kolmogorov-Smirnov test to determine whether the two distributions are indeed different, which, combined with a visual inspection, allowed us to demonstrate that the way we propose to aggregate performance measures leads to realized portfolio returns that first-order stochastically dominate the ones resulting from optimization with respect to only a single measure, for example the Treynor ratio or Jensen's alpha. We checked for second-order stochastic dominance via pointwise comparison of the so-called absolute Lorenz curve, i.e. the sequence of expected shortfalls over a range of quantiles. Since the plot of the absolute Lorenz curve for the aggregated performance measure was above the one corresponding to each individual measure, we were led to conclude that the algorithm we propose yields a portfolio return distribution that second-order stochastically dominates those obtained under virtually all of the individual performance measures considered. Chapter 3 proposes a measure of financial integration based on recent advances in asset pricing in incomplete markets. Given a base market (a set of traded assets) and an index of another market, we propose to measure financial integration through time by the size of the spread between the pricing bounds of the market index, relative to the base market. The bigger the spread around country index A, viewed from market B, the less integrated markets A and B are. We investigate the presence of structural breaks in the size of the spread for EMU member-country indices before and after the introduction of the Euro. We find evidence that both the level and the volatility of our financial integration measure increased after the introduction of the Euro. That counterintuitive result suggests the presence of an inherent weakness in any attempt to measure financial integration independently of economic fundamentals. Nevertheless, the results about the bounds on the risk-free rate appear plausible from the viewpoint of existing economic theory about the impact of integration on interest rates.
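
A minimal sketch of the pointwise dominance check described above, assuming two samples of realized returns and using the absolute (generalized) Lorenz curve, i.e. the integral of the quantile function up to each probability level. The data and function names are illustrative placeholders, not the thesis code.

    # Illustrative sketch: check second-order stochastic dominance of one
    # realized-return sample over another by comparing their absolute Lorenz
    # curves (cumulative expected shortfalls over a grid of quantiles).
    import numpy as np

    def absolute_lorenz(returns, grid):
        """Absolute Lorenz curve L(p) ~ p * mean of the p*n smallest returns,
        i.e. the integral of the empirical quantile function up to p."""
        x = np.sort(np.asarray(returns, dtype=float))
        n = len(x)
        curve = []
        for p in grid:
            k = max(1, int(np.ceil(p * n)))
            curve.append(p * x[:k].mean())
        return np.array(curve)

    def second_order_dominates(r_a, r_b, n_points=99):
        """True if sample A's absolute Lorenz curve lies (weakly) above B's at
        every grid point -- the pointwise criterion used in the text."""
        grid = np.linspace(0.01, 1.0, n_points)
        return bool(np.all(absolute_lorenz(r_a, grid) >= absolute_lorenz(r_b, grid)))

    # Example with simulated placeholder data: an "aggregated-measure" strategy
    # versus a "single-measure" strategy.
    rng = np.random.default_rng(0)
    agg = rng.normal(0.006, 0.04, 1000)     # hypothetical aggregated-measure returns
    single = rng.normal(0.004, 0.05, 1000)  # hypothetical single-measure returns
    print(second_order_dominates(agg, single))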

Relevance:

10.00%

Publisher:

Abstract:

We present a phase-field model for the dynamics of the interface between two immiscible fluids with arbitrary viscosity contrast in a rectangular Hele-Shaw cell. Using asymptotic matching techniques, we verify that the model yields the correct Hele-Shaw equations in the sharp-interface limit, and we compute the corrections to these equations to first order in the interface thickness. We also compute the effect of such corrections on the linear dispersion relation of the planar interface. We discuss in detail the conditions on the interface thickness needed to control the accuracy and convergence of the phase-field model to the limiting Hele-Shaw dynamics. In particular, convergence appears to be slower for high viscosity contrasts.
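
Schematically, and in generic notation rather than the paper's explicit formulas, the sharp-interface analysis amounts to expanding the phase-field dynamics in the interface thickness \epsilon:

    v_n^{\mathrm{PF}} = v_n^{\mathrm{HS}} + \epsilon\, v_n^{(1)} + \mathcal{O}(\epsilon^2),
    \qquad
    \omega^{\mathrm{PF}}(k) = \omega^{\mathrm{HS}}(k) + \epsilon\, \omega^{(1)}(k) + \mathcal{O}(\epsilon^2),

so that recovering the Hele-Shaw dynamics requires \epsilon to be small compared with the relevant physical length scales, with the first-order corrections becoming harder to control as the viscosity contrast increases.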

Relevance:

10.00%

Publisher:

Abstract:

Is it possible to build predictive models (PMs) of soil particle-size distribution (psd) in a region with complex geology and a young, unstable land surface? The main objective of this study was to answer this question. A set of 339 soil samples from a small slope catchment in Southern Brazil was used to build PMs of psd in the surface soil layer. Multiple linear regression models were constructed using terrain attributes (elevation, slope, catchment area, convergence index, and topographic wetness index). The PMs explained more than half of the data variance, a performance similar to (or even better than) that of the conventional soil mapping approach. For some size fractions, the PM performance can reach 70%. The largest uncertainties were observed in geologically more complex areas; significant improvements in the predictions can therefore only be achieved if accurate geological data are made available. In the meantime, PMs built on terrain attributes are an efficient means of predicting the psd of soils in regions of complex geology.
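
The sketch below illustrates the kind of predictive model described above: a multiple linear regression of one particle-size fraction on the five terrain attributes, evaluated by cross-validated R². The column ordering, data and figures are hypothetical placeholders, not the study's dataset or code.

    # Illustrative sketch of a terrain-attribute regression for a soil
    # particle-size fraction. All data are synthetic placeholders.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(42)
    n = 339  # number of soil samples, as in the study

    # Hypothetical terrain attributes: elevation, slope, catchment area,
    # convergence index, topographic wetness index.
    X = np.column_stack([
        rng.normal(450, 50, n),    # elevation (m)
        rng.uniform(0, 30, n),     # slope (degrees)
        rng.lognormal(6, 1, n),    # catchment area (m^2)
        rng.normal(0, 10, n),      # convergence index
        rng.normal(8, 2, n),       # topographic wetness index
    ])
    y = rng.uniform(10, 60, n)     # placeholder clay content (%)

    model = LinearRegression().fit(X, y)
    # The paper reports that such models explain more than half of the variance
    # (up to ~70% for some fractions); with the random placeholder data above
    # the cross-validated score is of course meaningless.
    print(cross_val_score(model, X, y, cv=5, scoring="r2").mean())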

Relevance:

10.00%

Publisher:

Abstract:

Optimizing collective behavior in multiagent systems requires algorithms to find not only appropriate individual behaviors but also a suitable composition of agents within a team. Over the last two decades, evolutionary methods have emerged as a promising approach for the design of agents and their composition into teams. The choice of a crossover operator that facilitates the evolution of optimal team composition is recognized to be crucial, but so far its effect has never been thoroughly quantified. Here, we highlight the limitations of two different crossover operators that exchange entire agents between teams: restricted agent swapping (RAS), which exchanges only corresponding agents between teams, and free agent swapping (FAS), which allows an arbitrary exchange of agents. Our results show that RAS suffers from premature convergence, whereas FAS entails insufficient convergence. Consequently, in both cases the exploration and exploitation aspects of the evolutionary algorithm are not well balanced, resulting in the evolution of suboptimal team compositions. To overcome this problem, we propose combining the two methods: our approach first applies FAS to explore the search space and then RAS to exploit it. This mixed approach is a much more efficient strategy for the evolution of team compositions than either strategy on its own. Our results suggest that such a mixed agent-swapping algorithm should always be preferred whenever the optimal composition of individuals in a multiagent system is unknown.
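
A minimal sketch of the two swapping operators and of the mixed FAS-then-RAS schedule described above. Teams are represented as lists of agent genomes; the function names, swap rate and switch_generation parameter are hypothetical, not the authors' implementation.

    # Illustrative sketch of the RAS and FAS crossover operators and the mixed
    # explore-then-exploit schedule. All parameters are placeholders.
    import random

    def restricted_agent_swap(team_a, team_b, rate=0.5):
        """RAS: agent i of one team may only be exchanged with agent i of the other."""
        a, b = team_a[:], team_b[:]
        for i in range(len(a)):
            if random.random() < rate:
                a[i], b[i] = b[i], a[i]
        return a, b

    def free_agent_swap(team_a, team_b, rate=0.5):
        """FAS: any agent of one team may be exchanged with any agent of the other."""
        a, b = team_a[:], team_b[:]
        for i in range(len(a)):
            if random.random() < rate:
                j = random.randrange(len(b))
                a[i], b[j] = b[j], a[i]
        return a, b

    def crossover(team_a, team_b, generation, switch_generation):
        """Mixed schedule: explore with FAS early, then exploit with RAS."""
        if generation < switch_generation:
            return free_agent_swap(team_a, team_b)
        return restricted_agent_swap(team_a, team_b)

In practice the switch point between the exploratory (FAS) and exploitative (RAS) phases would itself be a design parameter of the evolutionary run.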

Relevance:

10.00%

Publisher:

Abstract:

Résumé: Classical cryptography is based on mathematical concepts whose security depends on the computational difficulty of inverting functions. This type of encryption is at the mercy of the computing power of machines and of the discovery of algorithms that can compute the inverses of certain mathematical functions in a "reasonable" time. The use of a scheme whose security is scientifically proven is therefore indispensable, above all for critical exchanges (banking systems, governments, ...). Quantum cryptography answers this need: its security is based on laws of quantum physics that guarantee unconditionally secure operation. However, the application and integration of quantum cryptography remain a concern for developers of this type of solution. This thesis justifies the need to use quantum cryptography and shows that the cost incurred in deploying it is justified. It proposes a simple and practicable mechanism for integrating quantum cryptography into widely used communication protocols such as PPP, IPSec and 802.11i. Application scenarios illustrate the feasibility of these solutions. A methodology for evaluating quantum-cryptography-based solutions according to the Common Criteria is also proposed in this document. Abstract: Classical cryptography is based on mathematical functions. The robustness of a cryptosystem essentially depends on the difficulty of computing the inverse of its one-way function. There is no mathematical proof that establishes whether it is impossible to find the inverse of a given one-way function. Therefore, it is mandatory to use a cryptosystem whose security is scientifically proven (especially for banking, governments, etc.). On the other hand, the security of quantum cryptography can be formally demonstrated. In fact, its security is based on the laws of physics that assure unconditional security. How is it possible to use and integrate quantum cryptography into existing solutions? This thesis proposes a method to integrate quantum cryptography into existing communication protocols like PPP, IPSec and the 802.11i protocol. It sketches out some possible scenarios in order to prove the feasibility and to estimate the cost of such scenarios. Directives and checkpoints are given to help in certifying quantum cryptography solutions according to the Common Criteria.

Relevance:

10.00%

Publisher:

Abstract:

Synthesis report: The article constituting the present thesis describes a study of adult couples recruited from the general population who retrospectively described their respective parents' attitudes toward them during childhood. The role played by attachment attitudes in adult relationships, and in particular in couple relationships, is well established. It is likewise established that the relationships formed with parents in childhood influence the type of attachment attitude that will predominate in adulthood. We therefore investigated whether, within these adult couples, the partners share similar memories of the attitudes shown by their parents. To carry out this study, we contacted all the parents of children enrolled in the 2nd/3rd and 6th/7th school years in the schools of several municipalities in the Lausanne region, yielding a sample of 563 parental couples. Using self-report questionnaires, we assessed for each member of the couple: (1) his or her retrospective description of both parents' attitudes toward him or her during childhood; (2) the degree of his or her current psychiatric symptomatology; and (3) his or her assessment of the couple's current degree of dyadic adjustment. Comparing the spouses' scores on the Parental Bonding Instrument (PBI) revealed a within-couple resemblance in the warmth and affection ("Care") received during childhood from the parent of the same sex as the subject. Complementary analyses appear to rule out the possibility that this similarity is due to confounding factors such as age, cultural origin, socio-economic status, or degree of psychiatric symptomatology. Likewise, the similarity does not seem attributable to a growing convergence of the spouses' views over the course of their union. Furthermore, the degree of dyadic adjustment proved to depend on the cumulative degree of warmth and affection recalled by the two spouses, and not on the degree of within-couple similarity in the recollection of the warmth and affection received. Although it is based on retrospective assessments of parental attitudes and lacks a standardized psychiatric investigation including diagnostic criteria, this study nevertheless rests on a large sample recruited from the general population. Our results have implications for children's health in particular. Because of the similarity shown in our results, a child one of whose parents received little warmth and affection in childhood is more likely to have a second parent who also received little warmth and affection. As a consequence, on the one hand, the dyadic adjustment of the parental couple will be particularly low, which may affect the couple's children; on the other hand, since parental attitudes are partly transmitted from generation to generation, the same child risks being exposed, by both parents, to an attitude with little warmth and affection, which represents a risk for the later development of psychiatric disorders in that child.

Relevance:

10.00%

Publisher:

Abstract:

Preface. The starting point for this work, and eventually the subject of the whole thesis, was the question of how to estimate the parameters of affine stochastic volatility jump-diffusion models. These models are very important for contingent claim pricing. Their major advantage, the availability of analytical solutions for their characteristic functions, has made them the models of choice for many theoretical constructions and practical applications. At the same time, estimating the parameters of stochastic volatility jump-diffusion models is not a straightforward task: the problem comes from the variance process, which is not observable. There are several estimation methodologies that deal with latent variables; one appeared particularly interesting. It proposes an estimator that, in contrast to the other methods, requires neither discretization nor simulation of the process: the Continuous Empirical Characteristic Function (ECF) estimator based on the unconditional characteristic function. However, the procedure had been derived only for stochastic volatility models without jumps, and thus it became the subject of my research. This thesis consists of three parts, each written as an independent and self-contained article. At the same time, the questions answered by the second and third parts of this work arise naturally from the issues investigated and the results obtained in the first one. The first chapter is the theoretical foundation of the thesis. It proposes an estimation procedure for stochastic volatility models with jumps in both the asset price and the variance processes. The estimation procedure is based on the joint unconditional characteristic function of the stochastic process. The major analytical result of this part, as well as of the whole thesis, is the closed-form expression for the joint unconditional characteristic function of stochastic volatility jump-diffusion models. The empirical part of the chapter suggests that, besides stochastic volatility, jumps in both the mean and the volatility equations are relevant for modelling returns of the S&P500 index, which has been chosen as a general representative of the stock asset class. Hence, the next question is which jump process to use to model returns of the S&P500. Within the framework of affine jump-diffusion models, the decision about the jump process boils down to defining the intensity of the compound Poisson process (a constant or some function of the state variables) and choosing the distribution of the jump size. While the jump in the variance process is usually assumed to be exponential, there are at least three distributions of the jump size currently used for the asset log-prices: normal, exponential and double exponential. The second part of this thesis shows that normal jumps in the asset log-returns should be used if we are to model the S&P500 index by a stochastic volatility jump-diffusion model. This is a surprising result: the exponential distribution has fatter tails, and for this reason either an exponential or a double-exponential jump size was expected to provide the best fit of the stochastic volatility jump-diffusion models to the data. The idea of testing the efficiency of the Continuous ECF estimator on simulated data had already appeared when the first estimation results of the first chapter were obtained: in the absence of a benchmark or any ground for comparison, there is no way to be sure that our parameter estimates coincide with the true parameters of the models. The conclusion of the second chapter provides one more reason to perform that kind of test. Thus, the third part of this thesis concentrates on estimating the parameters of stochastic volatility jump-diffusion models on the basis of asset-price time series simulated from various "true" parameter sets. The goal is to show that the Continuous ECF estimator based on the joint unconditional characteristic function is capable of recovering the true parameters, and the third chapter demonstrates that our estimator indeed has this ability. Once it is clear that the Continuous ECF estimator based on the unconditional characteristic function works, the next question naturally arises: can the computational effort be reduced without affecting the efficiency of the estimator, or can the efficiency of the estimator be improved without dramatically increasing the computational burden? The efficiency of the Continuous ECF estimator depends on the number of dimensions of the joint unconditional characteristic function used in its construction. Theoretically, the more dimensions there are, the more efficient the estimation procedure. In practice, however, this relationship is not so straightforward, owing to the increasing computational difficulties. The second chapter, for example, in addition to the choice of the jump process, discusses the possibility of using the marginal, i.e. one-dimensional, unconditional characteristic function in the estimation instead of the joint, bi-dimensional, unconditional characteristic function. As a result, the preference for one or the other depends on the model to be estimated, so the computational effort can be reduced in some cases without affecting the efficiency of the estimator. Improving the estimator's efficiency by increasing its dimensionality faces more difficulties. The third chapter of this thesis, in addition to what was discussed above, compares the performance of the estimators based on bi- and three-dimensional unconditional characteristic functions on the simulated data. It shows that the theoretical efficiency of the Continuous ECF estimator based on the three-dimensional unconditional characteristic function is not attainable in practice, at least for the moment, due to the limitations on the computing power and optimization toolboxes available to the general public. Thus, the Continuous ECF estimator based on the joint, bi-dimensional, unconditional characteristic function has every reason to exist and to be used for estimating the parameters of stochastic volatility jump-diffusion models.
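
As an illustration of the characteristic-function matching that underlies a Continuous ECF-type estimator, the sketch below fits a deliberately simple i.i.d. Gaussian return model by minimizing a weighted distance between the empirical and model characteristic functions over a grid of arguments; for the affine stochastic volatility jump-diffusion models of the thesis one would substitute their closed-form joint unconditional characteristic function for the placeholder model CF. All names, grids and weights are illustrative assumptions, not the thesis code.

    # Illustrative sketch of empirical-characteristic-function matching.
    import numpy as np
    from scipy.optimize import minimize

    def empirical_cf(x, u):
        """Empirical characteristic function of the sample x at arguments u."""
        return np.exp(1j * np.outer(u, x)).mean(axis=1)

    def model_cf_gaussian(u, mu, sigma):
        """Unconditional CF of an i.i.d. Gaussian return model (placeholder)."""
        return np.exp(1j * u * mu - 0.5 * (sigma * u) ** 2)

    def ecf_objective(theta, x, u, weights):
        mu, log_sigma = theta
        diff = empirical_cf(x, u) - model_cf_gaussian(u, mu, np.exp(log_sigma))
        return np.sum(weights * np.abs(diff) ** 2)

    rng = np.random.default_rng(1)
    returns = rng.normal(0.0005, 0.01, 2500)   # placeholder "S&P500-like" returns
    u_grid = np.linspace(-200, 200, 201)       # CF arguments
    w = np.exp(-0.5 * (u_grid / 100) ** 2)     # weighting function over u

    res = minimize(ecf_objective, x0=[0.0, np.log(0.02)],
                   args=(returns, u_grid, w), method="Nelder-Mead")
    mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
    print(mu_hat, sigma_hat)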

Relevance:

10.00%

Publisher:

Abstract:

The Late Triassic and Jurassic platform and the oceanic complexes in Evvoia, Greece, share a complementary plate-tectonic evolution. Shallow-marine carbonate deposition responded to changing rates of subsidence and uplift, whilst the adjacent ocean underwent spreading, then convergence, collision and finally obduction over the platform complex. Late Triassic ocean spreading correlated with platform subsidence and the formation of a long-persisting peritidal passive-margin platform. Incipient drowning occurred from the Sinemurian to the late Middle Jurassic. This subsidence correlated with intra-oceanic subduction and plate convergence that led to supra-subduction calc-alkaline magmatism and the formation of a primitive volcanic arc. During the Middle Jurassic, plate collision caused arc uplift above the carbonate compensation depth (CCD) in the oceanic realm, while related thrust-faulting on the platform led to subaerial exposure. Patch reefs developed there during the Late Oxfordian to Kimmeridgian. Advanced oceanic nappe-loading caused platform drowning below the CCD during the Tithonian, which is documented by intercalations of reefal turbidites with non-carbonate radiolarites. Radiolarites and bypass turbidites, consisting of siliciclastic greywacke, terminate the platform succession beneath the emplaced oceanic nappe during late Tithonian to Valanginian time.

Relevance:

10.00%

Publisher:

Abstract:

This paper presents a new regional database of GDP in Spain for the years 1860, 1900, 1914 and 1930. Following Geary and Stark (2002), country-level GDP estimates are allocated across the Spanish provinces. The results are then compared with previous estimates. Further, this new evidence is used to analyze the evolution of regional inequality and convergence in the long run. According to the distribution dynamics approach suggested by Quah (1993, 1996), persistence appears to be a main feature of the regional distribution of output. Therefore, no evidence of long-run regional convergence in the Spanish economy is found.
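
A toy sketch of a Geary-Stark-style allocation, as it is commonly described: national sectoral GDP is distributed to provinces in proportion to sectoral employment, with productivity proxied by relative regional wages, and then rescaled so that provincial estimates add up to the national sectoral totals. The function, data and numbers are illustrative assumptions, not the paper's data or exact specification.

    # Illustrative sketch of allocating national sectoral GDP to regions.
    import numpy as np

    def allocate_gdp(national_gdp_by_sector, employment, wages):
        """
        national_gdp_by_sector : (J,)   national GDP per sector
        employment             : (I, J) workers per region and sector
        wages                  : (I, J) average wage per region and sector
        returns                : (I,)   estimated regional GDP
        """
        national_wage = (wages * employment).sum(axis=0) / employment.sum(axis=0)
        national_gdp_per_worker = national_gdp_by_sector / employment.sum(axis=0)
        # Unscaled regional output: productivity proxied by relative wages.
        raw = employment * national_gdp_per_worker * (wages / national_wage)
        # Rescale each sector so regional estimates sum to the national total.
        scale = national_gdp_by_sector / raw.sum(axis=0)
        return (raw * scale).sum(axis=1)

    # Toy example: 3 provinces, 2 sectors (agriculture, industry).
    gdp = np.array([100.0, 200.0])
    emp = np.array([[50.0, 10.0], [30.0, 40.0], [20.0, 50.0]])
    wag = np.array([[1.0, 2.0], [1.2, 2.5], [0.9, 3.0]])
    print(allocate_gdp(gdp, emp, wag))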

Relevance:

10.00%

Publisher:

Abstract:

We propose an iterative procedure to minimize the sum-of-squares function that avoids the nonlinear nature of estimating the first-order moving-average parameter and provides a closed form for the estimator. The asymptotic properties of the method are discussed, and the consistency of the linear least squares estimator is proved for the invertible case. We perform various Monte Carlo experiments in order to compare the sample properties of the linear least squares estimator with its nonlinear counterpart for the conditional and unconditional cases. Some examples are also discussed.
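
The sketch below shows one linear least-squares route to the MA(1) parameter in the spirit of the abstract: innovations are first proxied by the residuals of a long autoregression, after which the moving-average coefficient is a closed-form OLS slope (a Hannan-Rissanen-type two-step scheme). It is not claimed to be the authors' exact iterative procedure; names and settings are illustrative.

    # Illustrative sketch: linear least-squares estimation of the MA(1)
    # coefficient in y_t = e_t + theta * e_{t-1}, without a nonlinear optimizer.
    import numpy as np

    def ma1_linear_ls(y, ar_order=20):
        y = np.asarray(y, dtype=float)
        n = len(y)
        # Step 1: long AR(p) by OLS to obtain proxy innovations e_hat.
        X = np.column_stack([y[ar_order - k - 1:n - k - 1] for k in range(ar_order)])
        target = y[ar_order:]
        phi, *_ = np.linalg.lstsq(X, target, rcond=None)
        e_hat = target - X @ phi
        # Step 2: OLS of y_t on the lagged proxy innovation gives theta in closed form.
        e_lag = e_hat[:-1]
        y_t = y[ar_order + 1:]
        return (e_lag @ y_t) / (e_lag @ e_lag)

    # Quick check on data simulated from y_t = e_t + 0.6 e_{t-1} (invertible case).
    rng = np.random.default_rng(7)
    eps = rng.normal(size=5000)
    y = eps + 0.6 * np.concatenate(([0.0], eps[:-1]))
    print(ma1_linear_ls(y))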

Relevance:

10.00%

Publisher:

Abstract:

Partnerships in international migration governance promise a cooperative approach between countries of origin, transit and destination. The literature has generally conceptualised migration partnerships as a policy instrument. This article suggests that understanding the broader transformations taking place in international migration governance under the rubric of partnership demands a novel analysis. Using a governmentality perspective, I interpret migration partnerships as an instance of neoliberal rule. Focusing on the convergence of international migration governance between the international realm and the European and North American region in particular, I demonstrate that the partnership approach frames international migration governance so as to enlist governments, migrants and particular experts in governing international migration, and invokes specific technologies of neoliberal governing which contribute to producing responsible, self-disciplined partners who can be trusted to govern themselves according to the norms established by the partnership discourse. The partnership approach is not a mere policy instrument; it goes beyond the European region and has become an essential element of the governance of international migration.

Relevance:

10.00%

Publisher:

Abstract:

The identity [r]evolution is happening. Who are you, who am I in the information society? In recent years, the convergence of several factors - technological, political, economic - has accelerated a fundamental change in our networked world. On a technological level, information becomes easier to gather, to store, to exchange and to process. The belief that more information brings more security has been a strong political driver promoting information gathering since September 11. Profiling aims to transform information into knowledge in order to anticipate one's behaviour, needs or preferences. It can lead to categorizations according to specific risk criteria, for example, or to direct and personalized marketing. As a consequence, new forms of identities appear. They are no longer necessarily related to our names. They are based on information, on traces that we leave when we act or interact, when we go somewhere or just stay in one place, or even sometimes when we make a choice. They are related to the SIM cards of our mobile phones, to our credit card numbers, to the pseudonyms that we use on the Internet, to our email addresses, to the IP addresses of our computers, to our profiles... Like traditional identities, these new forms of identities can allow us to distinguish an individual within a group of people, or describe this person as belonging to a community or a category. How far have we moved through this process? The identity [r]evolution is already becoming part of our daily lives. People are eager to share information with their "friends" in social networks like Facebook, in chat rooms, or in Second Life. Customers take advantage of the numerous bonus cards made available to them. Video surveillance is becoming the rule. In several countries, traditional ID documents are being replaced by biometric passports with RFID technology. This raises several privacy issues and might even change the perception of the concept of privacy itself, particularly among the younger generation. In the information society, our (partial) identities become the illusory masks that we choose - or that we are assigned - in order to interact and communicate with each other. Rights, obligations, responsibilities, even reputation are increasingly associated with these masks. On the one hand, these masks become the key to accessing restricted information and using services. On the other hand, in case of fraud or negative reputation, the owner of such a mask can be penalized: doors remain closed, access to services is denied. Hence the current worrying growth of impersonation, identity theft and other identity-related crimes. Where is the path of the identity [r]evolution leading us? This booklet offers a glance at possible scenarios in the field of identity.

Relevance:

10.00%

Publisher:

Abstract:

This text aims to highlight the integration problems faced by people living in poverty. Programmes working toward their integration prove complex to set up, and their results are difficult to measure. When the integration of marginalized people is at issue, their motivation to escape their situation is often emphasized. Yet for integration to succeed, there must be a convergence of interests between, on the one hand, the individuals or groups to be integrated and, on the other, the integrating group, namely the large majority of the population or potential employers. In a first part, these pages describe the conceptual problems that arise when describing the status of the poor in developed societies; through the status of poverty conferred on the people concerned, what is chiefly at stake is a form of social regulation. In a second part, the text presents the results of observing some twenty trajectories of beneficiaries of so-called "low-threshold" ("bas seuil") social integration measures in the canton of Vaud. These people are characterized by a situation of marginalization from economic and social networks.

Relevance:

10.00%

Publisher:

Abstract:

The nuclear hormone receptors called PPARs (peroxisome proliferator-activated receptors alpha, beta, and gamma) regulate the peroxisomal beta-oxidation of fatty acids by induction of the acyl-CoA oxidase gene that encodes the rate-limiting enzyme of the pathway. Gel retardation and cotransfection assays revealed that PPAR alpha heterodimerizes with retinoid X receptor beta (RXR beta; RXR is the receptor for 9-cis-retinoic acid) and that the two receptors cooperate for the activation of the acyl-CoA oxidase gene promoter. The strongest stimulation of this promoter was obtained when both receptors were exposed simultaneously to their cognate activators. Furthermore, we show that natural fatty acids, and especially polyunsaturated fatty acids, activate PPARs as potently as does the hypolipidemic drug Wy 14,643, the most effective activator known so far. Moreover, we discovered that the synthetic arachidonic acid analogue 5,8,11,14-eicosatetraynoic acid is 100 times more effective than Wy 14,643 in the activation of PPAR alpha. In conclusion, our data demonstrate a convergence of the PPAR and RXR signaling pathways in the regulation of the peroxisomal beta-oxidation of fatty acids by fatty acids and retinoids.