933 results for ALGORITHMIC CONVERGENCE
Abstract:
The multiscale finite volume (MsFV) method has been developed to efficiently solve large heterogeneous problems (elliptic or parabolic); it is usually employed for pressure equations and delivers conservative flux fields to be used in transport problems. The method essentially relies on the hypothesis that the (fine-scale) problem can be reasonably described by a set of local solutions coupled by a conservative global (coarse-scale) problem. In most cases, the boundary conditions assigned to the local problems are satisfactory and the approximate conservative fluxes provided by the method are accurate. In numerically challenging cases, however, a more accurate localization is required to obtain a good approximation of the fine-scale solution. In this paper we develop a procedure to iteratively improve the boundary conditions of the local problems. The algorithm relies on the data structure of the MsFV method and employs a Krylov-subspace projection method to obtain an unconditionally stable scheme and to accelerate convergence. Two variants are considered: in the first, only the MsFV operator is used; in the second, the MsFV operator is combined in a two-step method with an operator derived from the problem solved to construct the conservative flux field. The resulting iterative MsFV algorithms allow an arbitrary reduction of the solution error without compromising the construction of a conservative flux field, which is guaranteed at any iteration. Since it converges to the exact solution, the method can be regarded as a linear solver. In this context, the schemes proposed here can be viewed as preconditioned versions of the Generalized Minimal Residual method (GMRES), with the distinctive characteristic that the residual on the coarse grid is zero at every iteration (so conservative fluxes can be obtained at any stage).
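Since the abstract frames the iterative MsFV schemes as preconditioned GMRES, the following is a minimal, generic sketch of right-preconditioned GMRES in SciPy, given only for orientation; the toy Poisson-like matrix and the incomplete-LU preconditioner are stand-ins assumed for illustration, not the MsFV operators of the paper.

# Minimal sketch: GMRES with a generic preconditioner (illustrative only;
# the MsFV-based preconditioners of the paper are NOT reproduced here).
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Assumed toy problem: a 2D Poisson-like system standing in for a pressure equation.
n = 50
A = sp.diags([-1, -1, 4, -1, -1], [-n, -1, 0, 1, n], shape=(n * n, n * n), format="csc")
b = np.ones(n * n)

# Stand-in preconditioner: incomplete LU (the paper uses MsFV-derived operators instead).
ilu = spla.spilu(A, drop_tol=1e-4)
M = spla.LinearOperator(A.shape, matvec=ilu.solve)

x, info = spla.gmres(A, b, M=M)
print("converged" if info == 0 else f"gmres info = {info}",
      "| residual norm =", np.linalg.norm(b - A @ x))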
Abstract:
The low levels of unemployment recorded in the UK in recent years are widely cited as evidence of the country's improved economic performance, and the apparent convergence of unemployment rates across the country's regions is used to suggest that the longstanding divide in living standards between the relatively prosperous 'south' and the more depressed 'north' has been substantially narrowed. Dissenters from these conclusions have drawn attention to the greatly increased extent of non-employment (around a quarter of the UK's working-age population are not in employment) and the marked regional dimension in its distribution across the country. Amongst these dissenters it is generally agreed that non-employment is concentrated amongst older males previously employed in the now very much smaller 'heavy' industries (e.g. coal, steel, shipbuilding). This paper uses the tools of compositional data analysis to provide a much richer picture of non-employment, one which challenges the conventional wisdom about UK labour market performance as well as the dissenters' view of the nature of the problem. It is shown that, associated with the striking 'north/south' divide in non-employment rates, there is a statistically significant relationship between the size of the non-employment rate and the composition of non-employment. Specifically, it is shown that the share of unemployment in non-employment is negatively correlated with the overall non-employment rate: in regions where the non-employment rate is high, the share of unemployment is relatively low. So the unemployment rate is not a very reliable indicator of regional disparities in labour market performance. Even more importantly from a policy viewpoint, a significant positive relationship is found between the size of the non-employment rate and the share of those not employed by reason of sickness or disability, and it seems (contrary to the dissenters) that this connection is just as strong for women as it is for men.
Abstract:
This paper proposes a hybrid coordination method for behavior-based control architectures. The hybrid method takes advantage of the robustness and modularity of competitive approaches as well as the optimized trajectories of cooperative ones. The paper shows the feasibility of applying this hybrid method to the 3D navigation of an autonomous underwater vehicle (AUV). The behaviors are learnt online by means of reinforcement learning. A continuous Q-learning implemented with a feed-forward neural network is employed. Realistic simulations were carried out. The results obtained show the good performance of the hybrid method in behavior coordination as well as the convergence of the behaviors.
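For orientation, the sketch below shows one plausible way a hybrid competitive/cooperative coordinator can merge behavior responses: fully activated, higher-priority behaviors dominate (competitive), while partial activations are blended (cooperative). The priority ordering, the activation semantics, and the three-component response vector are assumptions made for illustration, not the coordinator defined in the paper.

# Plausible sketch of a hybrid (competitive/cooperative) behavior coordinator.
import numpy as np

def coordinate(behaviors):
    """behaviors: list of (priority, activation in [0,1], response vector);
    a lower priority value means a more important behavior."""
    blended, remaining = np.zeros(3), 1.0
    for _, activation, response in sorted(behaviors, key=lambda b: b[0]):
        weight = min(activation, remaining)   # higher-priority behaviors claim their share first
        blended += weight * np.asarray(response, dtype=float)
        remaining -= weight
        if remaining <= 0.0:                  # a fully activated behavior suppresses the rest (competitive)
            break
    return blended                            # partial activations are blended (cooperative)

# Example: "avoid obstacle" partially active, "go to waypoint" fully active as fallback.
print(coordinate([(0, 0.6, [0.0, 1.0, 0.0]), (1, 1.0, [1.0, 0.0, 0.0])]))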
Abstract:
The purpose of this paper is to propose a Neural-Q_learning approach designed for the online learning of simple and reactive robot behaviors. In this approach, the Q_function is generalized by a multi-layer neural network, allowing the use of continuous states and actions. The algorithm uses a database of the most recent learning samples to accelerate and guarantee convergence. Each Neural-Q_learning function represents an independent, reactive and adaptive behavior which maps sensorial states to robot control actions. A group of these behaviors constitutes a reactive control scheme designed to fulfill simple missions. The paper centers on the description of the Neural-Q_learning-based behaviors, showing their performance with an underwater robot in a target-following task. Real experiments demonstrate the convergence and stability of the learning system, pointing out its suitability for online robot learning. Advantages and limitations are discussed.
Abstract:
Reinforcement learning (RL) is a very suitable technique for robot learning, as it can learn in unknown environments and with real-time computation. The main difficulties in adapting classic RL algorithms to robotic systems are the generalization problem and the correct observation of the Markovian state. This paper attempts to solve the generalization problem by proposing the semi-online neural-Q_learning algorithm (SONQL). The algorithm uses the classic Q_learning technique with two modifications. First, a neural network (NN) approximates the Q_function, allowing the use of continuous states and actions. Second, a database of the most representative learning samples accelerates and stabilizes the convergence. The term semi-online refers to the fact that the algorithm uses not only the current but also past learning samples. However, the algorithm is able to learn in real time while the robot is interacting with the environment. The paper shows simulated results with the "mountain-car" benchmark and also real results with an underwater robot in a target-following behavior.
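The sketch below illustrates, in a self-contained way, the two modifications the abstract describes: a small neural network approximating the Q_function and a database (replay buffer) of learning samples that is reused during training. The toy 1D target-following task, the discrete action set, and all hyperparameters are assumptions made for illustration; the published SONQL algorithm works with continuous states and actions on a real underwater robot.

# Illustrative sketch of Q-learning with a neural-network approximator and a
# database of past samples, in the spirit of the SONQL idea described above.
import numpy as np

rng = np.random.default_rng(0)
ACTIONS = np.array([-0.1, 0.0, 0.1])    # assumed discrete action set
GAMMA, LR, EPS = 0.95, 0.01, 0.2

# Tiny two-layer Q-network: 2 inputs (robot pos, target pos) -> 3 action values.
W1, b1 = rng.normal(0, 0.1, (16, 2)), np.zeros(16)
W2, b2 = rng.normal(0, 0.1, (3, 16)), np.zeros(3)

def q_values(s):
    h = np.tanh(W1 @ s + b1)
    return W2 @ h + b2, h

def update(s, a, td_target):
    """One semi-gradient step of the Q-network toward the TD target for action a."""
    global W1, b1, W2, b2
    q, h = q_values(s)
    delta = q[a] - td_target              # dLoss/dq[a] for loss 0.5*(q[a]-td_target)^2
    dh = delta * W2[a] * (1.0 - h ** 2)   # backprop through tanh
    W2[a] -= LR * delta * h
    b2[a] -= LR * delta
    W1 -= LR * np.outer(dh, s)
    b1 -= LR * dh

replay = []                               # database of (s, a, r, s_next) samples
pos, target = 0.0, 0.8
for step in range(5000):
    s = np.array([pos, target])
    q, _ = q_values(s)
    a = rng.integers(3) if rng.random() < EPS else int(np.argmax(q))
    pos = float(np.clip(pos + ACTIONS[a], -1.0, 1.0))
    r = -abs(target - pos)                # reward: negative distance to the target
    replay.append((s, a, r, np.array([pos, target])))
    replay = replay[-500:]                # keep only the most recent samples
    # Learn from a minibatch of current and past samples (the "semi-online" part).
    for s_i, a_i, r_i, sn_i in [replay[k] for k in rng.integers(len(replay), size=8)]:
        target_q = r_i + GAMMA * np.max(q_values(sn_i)[0])
        update(s_i, a_i, target_q)
    if step % 100 == 0:                   # occasionally move the target
        target = rng.uniform(-1, 1)

print("final distance to target:", abs(target - pos))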
Abstract:
This paper proposes a field application of a high-level reinforcement learning (RL) control system for solving the action-selection problem of an autonomous robot in a cable-tracking task. The learning system is characterized by the use of a direct policy-search method for learning the internal state/action mapping. Policy-only algorithms may suffer from long convergence times when dealing with real robots. In order to speed up the process, the learning phase has been carried out in a simulated environment and, in a second step, the policy has been transferred to and tested successfully on a real robot. Future work plans to continue the learning process online on the real robot while it performs the mentioned task. We demonstrate the feasibility of the approach with real experiments on the underwater robot ICTINEU AUV.
Abstract:
The aim of this work is to present and criticize the theory of truth recently defended by Apel. First, Apel's consensus and pragmatic theory of truth is presented in relation to Habermas's project of a Critical Theory of Society and to the problem of foundations in ethical reasoning. Second, his idealized and transcendental version of truth, which invokes the notion of convergence in an ideal community of free inquirers, is analysed. Finally, in the spirit of Wittgenstein and following Putnam's later analysis, a critical assessment is attempted. The result of all this will be a more modest conception of truth as merely a quality of human linguistic praxis, but not its cornerstone.
Abstract:
Peroxisome proliferator-activated receptors (PPARs) are members of the nuclear receptor superfamily. For transcriptional activation of their target genes, PPARs heterodimerize with the retinoid-X receptor (RXR). The convergence of the PPAR and RXR signaling pathways has been shown to have an important function in lipid metabolism. The promoter of the gene encoding the acyl-coenzyme-A oxidase (ACO), the rate-limiting enzyme in peroxisomal beta-oxidation of fatty acids, is a target site of PPAR action. In this study, we examined the role and the contribution of both cis- and trans-acting factors in the transcriptional regulation of this gene using transient transfections in insect cells. We identified several functional cis-acting elements present in the promoter of the ACO gene and established that PPAR-dependent as well as PPAR-independent mechanisms can activate the ACO promoter in these cells. We show that the PPAR/RXR heterodimer exerts its effect through two response elements within the ACO promoter, in synergy with the transcription factor Sp1 via five Sp1-binding sites. Furthermore, this functional interaction also occurs when Sp1 is co-expressed with PPAR or RXR alone, indicating that activation can occur independently of PPAR/RXR heterodimers.
Abstract:
The Cursus Romand de Médecine de Famille, known until 2013 as the "Cursus Romand de Médecine Générale" (CRMG), arose from the convergence of two dynamics. The first was local: it centred on the canton of Vaud, with the postgraduate training programme in general medicine launched in that canton in 1999 under the impetus of Dr. Fréchelin, Prof. Pécoud and Dr. Pilet. The initial idea of this strictly Vaud-based curriculum was to develop a postgraduate training programme that would help junior doctors in their training, but also promote family medicine and create a strong professional identity. The second dynamic was political: in 2005, at a press conference, the CDS publicly announced that a shortage of physicians was threatening Switzerland. At the same time, the group of physicians who had launched the Vaud curriculum was also considering extending the territory of postgraduate training in general medicine, taking the view that the postgraduate training of family physicians should not be confined to the university cantons. [Excerpt, p. 9]
Abstract:
First: a continuous-time version of Kyle's model (Kyle 1985), known as Back's model (Back 1992), of asset pricing with asymmetric information is studied. A larger class of price processes and of noise traders' processes is considered. The price process, as in Kyle's model, is allowed to depend on the path of the market order. The noise traders' process is an inhomogeneous Lévy process. Solutions are found via the Hamilton-Jacobi-Bellman equations. When the insider is risk-neutral, the price pressure is constant and there is no equilibrium in the presence of jumps. If the insider is risk-averse, there is no equilibrium in the presence of either jumps or drifts. The case where the release time is unknown is also analysed. A general relation is established between the problem of finding an equilibrium and the enlargement of filtrations. The case of a random announcement time is also considered; in that case the market is not fully efficient, and an equilibrium exists if the sensitivity of prices with respect to the global demand decreases in time in accordance with the distribution of the random time. Second: power variations. The asymptotic behaviour of the power variation of processes of the form ∫_0^t u(s-) dS(s), where S is an alpha-stable process with index of stability 0 < alpha < 2 and the integral is an Itô integral, is considered. Stable convergence of the corresponding fluctuations is established. These results provide statistical tools to infer the process u from discrete observations. Third: a bond market is studied in which the short rates r(t) evolve as an integral of g(t-s)sigma(s) with respect to W(ds), where g and sigma are deterministic and W is the stochastic Wiener measure. Processes of this type are particular cases of ambit processes and are in general not semimartingales.
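For orientation, the power-variation object referred to in the second part can be written out in LaTeX as follows; the normalization and the form of the limit are assumptions based on standard results for integrals with respect to stable processes and should be checked against the thesis itself.

% Schematic statement (assumed normalization and limit; see the thesis for the precise result).
\[
  Y_t = \int_0^t u(s-)\, dS_s, \qquad
  V_n^p(Y)_t = \sum_{i=1}^{\lfloor nt \rfloor} \bigl| Y_{i/n} - Y_{(i-1)/n} \bigr|^p ,
\]
\[
  n^{\,p/\alpha - 1}\, V_n^p(Y)_t \;\longrightarrow\; m_{p,\alpha} \int_0^t |u(s)|^p \, ds
  \quad \text{(in probability, for suitable } p < \alpha \text{),}
\]
% where S is an alpha-stable process and m_{p,\alpha} is a constant depending only on p and alpha.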
Abstract:
We present a KAM theory for some dissipative systems (geometrically, these are conformally symplectic systems, i.e. systems that transform a symplectic form into a multiple of itself). For systems with n degrees of freedom depending on n parameters we show that it is possible to find solutions with n-dimensional (Diophantine) frequencies by adjusting the parameters. We do not assume that the system is close to integrable, but we use an a-posteriori format. Our unknowns are a parameterization of the solution and a parameter. We show that if there is a sufficiently approximate solution of the invariance equation, which also satisfies some explicit non-degeneracy conditions, then there is a true solution nearby. We present results both in Sobolev norms and in analytic norms. The a-posteriori format has several consequences: A) smooth dependence on the parameters, including the singular limit of zero dissipation; B) estimates on the measure of parameters covered by quasi-periodic solutions; C) convergence of perturbative expansions in analytic systems; D) bootstrap of regularity (i.e., that all tori which are smooth enough are analytic if the map is analytic); E) a numerically efficient criterion for the break-down of the quasi-periodic solutions. The proof is based on an iterative quadratically convergent method and on suitable estimates on the (analytical and Sobolev) norms of the approximate solution. The iterative step takes advantage of some geometric identities, which give a very useful coordinate system in the neighborhood of invariant (or approximately invariant) tori. This system of coordinates has several other uses: A) it shows that for dissipative conformally symplectic systems the quasi-periodic solutions are attractors; B) it leads to efficient algorithms, which have been implemented elsewhere. Details of the proof are given mainly for maps, but we also explain the slight modifications needed for flows, and we devote the appendix to presenting explicit algorithms for flows.
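As a pointer to the setting, the invariance equation solved in this a-posteriori KAM framework can be written schematically as follows; the notation (map f_mu with conformal factor lambda, torus embedding K, rigid rotation T_omega) is assumed here for illustration and may differ in detail from the paper.

% Schematic invariance equation for a conformally symplectic map (assumed notation).
% f_mu: conformally symplectic map with f_mu^* Omega = lambda Omega for a constant lambda;
% K: T^n -> M parameterizes the invariant torus; T_omega(theta) = theta + omega.
\[
  f_{\mu}\circ K \;=\; K\circ T_{\omega},
  \qquad T_{\omega}(\theta)=\theta+\omega,
  \qquad f_{\mu}^{*}\,\Omega=\lambda\,\Omega .
\]
% A-posteriori flavor: an approximate solution with small residual E = f_mu o K - K o T_omega,
% satisfying explicit non-degeneracy conditions, implies a true solution nearby.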
Abstract:
Tourette syndrome is a childhood-onset neuropsychiatric disorder with a high prevalence of attention deficit hyperactivity and obsessive-compulsive disorder co-morbidities. Structural changes have been found in frontal cortex and striatum in children and adolescents. A limited number of morphometric studies in Tourette syndrome persisting into adulthood suggest ongoing structural alterations affecting frontostriatal circuits. Using cortical thickness estimation and voxel-based analysis of T1- and diffusion-weighted structural magnetic resonance images, we examined 40 adults with Tourette syndrome in comparison with 40 age- and gender-matched healthy controls. Patients with Tourette syndrome showed relative grey matter volume reduction in orbitofrontal, anterior cingulate and ventrolateral prefrontal cortices bilaterally. Cortical thinning extended into the limbic mesial temporal lobe. The grey matter changes were modulated additionally by the presence of co-morbidities and symptom severity. Prefrontal cortical thickness reduction correlated negatively with tic severity, while volume increase in primary somatosensory cortex depended on the intensity of premonitory sensations. Orbitofrontal cortex volume changes were further associated with abnormal water diffusivity within grey matter. White matter analysis revealed changes in fibre coherence in patients with Tourette syndrome within anterior parts of the corpus callosum. The severity of motor tics and premonitory urges had an impact on the integrity of tracts corresponding to cortico-cortical and cortico-subcortical connections. Our results provide empirical support for a patho-aetiological model of Tourette syndrome based on developmental abnormalities, with perturbation of compensatory systems marking persistence of symptoms into adulthood. We interpret the symptom severity related grey matter volume increase in distinct functional brain areas as evidence of ongoing structural plasticity. The convergence of evidence from volume and water diffusivity imaging strengthens the validity of our findings and attests to the value of a novel multimodal combination of volume and cortical thickness estimations that provides unique and complementary information by exploiting their differential sensitivity to structural change.
Abstract:
The birth of the first child is a normative event creating important changes in the life course of men and women. This research analyzes the transition to parenthood as a moment that creates social stratification. Three dependent dimensions are studied in terms of their change: the occupational career, domestic labour, and the quality of the conjugal relationship. The concepts of interindividual divergence and convergence, derived from the cumulative dis/advantage hypothesis and the alternative hypothesis of compensatory effects, are used to operationalize the change in these three dimensions after the birth of the first child. Results show, firstly, that divergences take place between men and women becoming parents in the three dependent dimensions. These inter-sex divergences are associated with convergence between same-sex individuals. Secondly, the analyses focus on further divergences and convergences taking place between same-sex individuals, in relation to initial social, cultural and economic resources. Which mothers do not reduce their initial occupational rates? Which fathers reduce their involvement in domestic tasks less than others? Which parents experience a smaller reduction in the quality of their conjugal relationship? The answers to these questions show how the process of social stratification within a cohort has to be explained in relation to the change taking place during a specific transition, and not only as a result of the simple passage of time.
Abstract:
Information technologies and documentation have made the founding of virtual libraries possible anywhere in the world, with universities being the institutions where the evolution towards the online supply of services to their users has advanced most significantly. In Europe, convergence within the European Higher Education Area has forced university libraries to adapt to the functions assigned to them by the Declaration of Bologna, and some resistance to the necessary change still has to be overcome. Besides the active participation of librarians and information-retrieval professionals, information professionals are needed who exert the necessary leadership and ensure that a major aim is to coordinate access to core health information and deliver it to health professionals and researchers efficiently and more cost-effectively through the implementation of novel technologies.
Abstract:
Quantitative or algorithmic trading is the automation of investment decisions obeying a fixed or dynamic set of rules to determine trading orders. It has increasingly grown to account for up to 70% of the trading volume of some of the biggest financial markets, such as the New York Stock Exchange (NYSE). However, there is not a significant amount of academic literature devoted to it, due to the private nature of investment banks and hedge funds. This project aims to review the literature and discuss the available models in a field where publications are scarce and infrequent. We review the basic and fundamental mathematical concepts needed for modeling financial markets, such as stochastic processes, stochastic integration, and basic models for price and spread dynamics necessary for building quantitative strategies. We also contrast these models with real market data sampled at one-minute frequency from the Dow Jones Industrial Average (DJIA). Quantitative strategies try to exploit two types of behavior: trend following or mean reversion. The former is grouped in the so-called technical models and the latter in the so-called pairs trading. Technical models have been discarded by financial theoreticians, but we show that they can be properly cast as a well-defined scientific predictor if the signal generated by them passes the test of being a Markov time. That is, we can tell whether the signal has occurred or not by examining the information up to the current time; or, more technically, if the event is F_t-measurable. On the other hand, the concept of pairs trading, or market-neutral strategy, is fairly simple. However, it can be cast in a variety of mathematical models, ranging from a method based on a simple Euclidean distance, through a co-integration framework, to stochastic differential equations such as the well-known mean-reverting Ornstein-Uhlenbeck equation and its variations. A model for forecasting any economic or financial magnitude could be properly defined with scientific rigor but could also lack any economic value and be considered useless from a practical point of view. This is why this project could not be complete without a backtesting of the mentioned strategies. Conducting a useful and realistic backtest is by no means a trivial exercise, since the "laws" that govern financial markets are constantly evolving in time. This is why we emphasize the calibration process of the strategies' parameters to adapt to the given market conditions. We find that the parameters of technical models are more volatile than their counterparts from market-neutral strategies, and that calibration must be done at high sampling frequency to constantly track the current market situation. As a whole, the goal of this project is to provide an overview of a quantitative approach to investment, reviewing basic strategies and illustrating them by means of a backtest with real financial market data. The sources of the data used in this project are Bloomberg for intraday time series and Yahoo! for daily prices. All numerical computations and graphics used and shown in this project were implemented from scratch in MATLAB as a part of this thesis. No other mathematical or statistical software was used.
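To make the mean-reversion (pairs-trading) idea concrete, here is a minimal sketch on synthetic data: estimate a hedge ratio on a trailing window, form a z-score of the spread, and trade when it leaves and re-enters a band, using only information available up to the current time (so the entry signal is a Markov time). The window length, thresholds, and synthetic price model are illustrative assumptions, not the calibration or backtesting procedure of the thesis.

# Minimal sketch of a mean-reversion ("pairs trading") signal on synthetic data.
import numpy as np

rng = np.random.default_rng(42)

# Synthetic cointegrated pair: y follows x plus a mean-reverting (OU-like) spread.
n = 2000
x = np.cumsum(rng.normal(0, 1, n)) + 100.0
spread = np.zeros(n)
for t in range(1, n):                       # discretized Ornstein-Uhlenbeck-type spread
    spread[t] = 0.95 * spread[t - 1] + rng.normal(0, 0.5)
y = x + spread

# Rolling hedge ratio via OLS on a trailing window, then a z-score of the residual spread.
window, entry, exit_ = 200, 2.0, 0.5
position = np.zeros(n)                      # +1 long spread, -1 short spread, 0 flat
for t in range(window, n):
    xs, ys = x[t - window:t], y[t - window:t]
    beta = np.polyfit(xs, ys, 1)[0]         # hedge ratio estimated from past data only
    resid = ys - beta * xs
    z = (y[t] - beta * x[t] - resid.mean()) / resid.std()
    # The signal depends only on information up to time t (a Markov time).
    if position[t - 1] == 0 and abs(z) > entry:
        position[t] = -np.sign(z)           # short the rich leg, long the cheap one
    elif position[t - 1] != 0 and abs(z) < exit_:
        position[t] = 0                     # close when the spread has reverted
    else:
        position[t] = position[t - 1]

pnl = position[:-1] * np.diff(y - x)        # toy P&L of the (unit-hedged) spread position
print("number of trades:", int(np.sum(np.abs(np.diff(position)) > 0)))
print("cumulative toy P&L:", float(pnl.sum()))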