606 results for recursive detrending
Abstract:
A growing literature considers the impact of uncertainty using SVAR models that include proxies for uncertainty shocks as endogenous variables. In this paper we consider the impact of measurement error in these proxies on the estimated impulse responses. We show via a Monte Carlo experiment that measurement error can result in attenuation bias in impulse responses. In contrast, the proxy SVAR that uses the uncertainty shock proxy as an instrument does not suffer from this bias. Applying this latter method to the Bloom (2009) dataset results in impulse responses to uncertainty shocks that are larger in magnitude and more persistent than those obtained from a recursive SVAR.
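The mechanism behind the attenuation result can be illustrated outside the full SVAR setting. Below is a minimal sketch, assuming a stylized static analogue in which the outcome responds to a true shock that is only observed through noisy measures: regressing on the noisy uncertainty variable (as when it is ordered in a recursive VAR) attenuates the estimated response, whereas using an external proxy as an instrument recovers it. All parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n, b = 50_000, 2.0               # sample size and true impact of the shock (assumed)

s = rng.normal(size=n)           # true uncertainty shock (unobserved)
u = s + rng.normal(size=n)       # uncertainty variable measured with error
m = s + rng.normal(size=n)       # external proxy with independent measurement error
y = b * s + rng.normal(size=n)   # outcome responding to the true shock

ols = np.cov(y, u)[0, 1] / np.var(u, ddof=1)   # noisy measure as regressor: attenuated
iv = np.cov(y, m)[0, 1] / np.cov(u, m)[0, 1]   # proxy used as instrument: consistent

print(f"true impact {b}, regression on noisy measure {ols:.2f}, proxy/IV {iv:.2f}")
```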
Abstract:
In recent years, Deep Learning (DL) techniques have gained much attention from the Artificial Intelligence (AI) and Natural Language Processing (NLP) research communities because these approaches can often learn features from data without the need for human design or engineering interventions. In addition, DL approaches have achieved some remarkable results. In this paper, we survey major recent contributions that use DL techniques for NLP tasks. The reviewed topics are limited to contributions to text understanding, such as sentence modelling, sentiment classification, semantic role labelling, question answering, etc. We provide an overview of deep learning architectures based on Artificial Neural Networks (ANNs), Convolutional Neural Networks (CNNs), Long Short-Term Memory (LSTM), and Recursive Neural Networks (RNNs).
Abstract:
Volatile organic compounds are a common source of groundwater contamination that can be easily removed by air stripping in randomly packed columns with counter-current flow between the phases. This work proposes a new column design methodology, valid for any type of packing and contaminant, which avoids the need for an arbitrarily chosen diameter. It also avoids the usual graphical Eckert correlations for pressure drop. The hydraulic features are chosen in advance as a design criterion. The design procedure was translated into a convenient algorithm in the C++ language. A column was built in order to test the design and its theoretical steady-state and dynamic behaviour. The experiments were conducted using a solution of chloroform in distilled water. The results allowed a correction of the theoretical global mass transfer coefficient previously estimated by the Onda correlations, which depend on several parameters that are not easy to control experimentally. To best describe the column behaviour under stationary and dynamic conditions, an original mathematical model was developed. It consists of a system of two nonlinear partial differential equations (distributed parameters). When the flows are steady, however, the system becomes linear, although it still lacks an evident analytical solution. In steady state the resulting ODE can be solved analytically, while in the dynamic state discretization of the PDEs by finite differences overcomes this difficulty. A numerical algorithm was used to estimate the contaminant concentrations in both phases along the column. The large number of resulting algebraic equations and the impossibility of generating a recursive procedure did not allow the construction of a generalized program, but an iterative procedure developed in a spreadsheet allowed the simulation. The solution is stable only for similar discretization values; if different values of the time and space discretization parameters are used, the solution easily becomes unstable. The dynamic behaviour of the system was simulated for the common liquid-phase perturbations: step, impulse, rectangular pulse and sinusoidal. The final results do not exhibit strange or unpredictable behaviours.
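As a rough illustration of the finite-difference approach mentioned above, the sketch below integrates a simplified pair of counter-current transport equations (liquid flowing down, air flowing up, with a single lumped mass-transfer term applied to both phases) using an explicit upwind scheme. The model structure and all parameter values (velocities, Henry constant, transfer coefficient) are assumptions for illustration, not the thesis's actual model or data.

```python
import numpy as np

# Hypothetical parameters for a simplified counter-current stripping column:
# liquid enters at the top and flows down, clean air enters at the bottom and flows up.
nz, L = 50, 2.0                     # grid points, column height [m]
dz = L / nz
v_l, v_g = 0.01, 0.10               # liquid / gas superficial velocities [m/s] (assumed)
K = 0.05                            # lumped overall mass-transfer coefficient [1/s] (assumed)
H = 0.15                            # dimensionless Henry constant for chloroform (assumed)
dt = 0.4 * dz / max(v_l, v_g)       # explicit scheme: respect the CFL condition

c_l = np.zeros(nz)                  # liquid-phase concentration, index 0 = column bottom
c_g = np.zeros(nz)                  # gas-phase concentration
c_in = 1.0                          # liquid feed concentration (step perturbation at t = 0)

for _ in range(20_000):
    transfer = K * (c_l - c_g / H)                          # driving force, liquid to gas
    dcl = np.empty_like(c_l)
    dcl[:-1] = v_l * (c_l[1:] - c_l[:-1]) / dz - transfer[:-1]   # liquid advected downward
    dcl[-1] = v_l * (c_in - c_l[-1]) / dz - transfer[-1]         # liquid inlet at the top
    dcg = np.empty_like(c_g)
    dcg[1:] = -v_g * (c_g[1:] - c_g[:-1]) / dz + transfer[1:]    # gas advected upward
    dcg[0] = -v_g * (c_g[0] - 0.0) / dz + transfer[0]            # clean air at the bottom
    c_l += dt * dcl
    c_g += dt * dcg

print(f"steady-state removal of the contaminant: {1 - c_l[0] / c_in:.1%}")
```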
Abstract:
In this work project we study the tail properties of currency returns and analyze whether changes in the tail indices of these series have occurred over time as a consequence of turbulent periods. Our analysis is based on the methods introduced by Quintos, Fan and Phillips (2001) and Candelon and Straetmans (2006, 2013), and their extensions. Specifically, considering a sample of daily data from December 31, 1993 to February 13, 2015, we apply the recursive test in calendar time (forward test) and in reverse calendar time (backward test) and indeed detect falls and rises in the tail indices, signifying increases and decreases in the probability of extreme events.
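A minimal sketch of the forward (calendar-time) recursion idea is given below, assuming the Hill estimator as the tail-index estimator and synthetic returns with an artificial break. It only tracks how the expanding-window estimate drifts relative to the full-sample value and does not reproduce the Quintos-Fan-Phillips test statistic or its critical values.

```python
import numpy as np

def hill(losses, k):
    """Hill estimator of the tail index from the k largest exceedances."""
    order = np.sort(losses)
    threshold = order[-(k + 1)]
    return 1.0 / np.mean(np.log(order[-k:] / threshold))

rng = np.random.default_rng(1)
# Hypothetical returns: the tail becomes heavier halfway through the sample.
x = np.concatenate([rng.standard_t(df=6, size=2500),
                    rng.standard_t(df=3, size=2500)])
losses = np.abs(x)

n, k_frac = losses.size, 0.05                 # fraction of order statistics used
alpha_full = hill(losses, int(k_frac * n))

# Forward recursion: re-estimate the tail index on expanding windows and track the
# ratio to the full-sample estimate; a sustained fall flags a heavier tail.
for end in range(500, n + 1, 500):
    a = hill(losses[:end], max(10, int(k_frac * end)))
    print(f"obs 1..{end:5d}: tail index {a:.2f} (ratio to full sample {a / alpha_full:.2f})")
```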
Abstract:
Brain metastases (BM) occur in 20-50% of NSCLC and 50-80% of SCLC. In this review, we look at evidence-based medicine data and give some perspectives on the management of BM. We address the problems of multiple BM, single BM and prophylactic cranial irradiation. Recursive Partitioning Analysis (RPA) is a powerful prognostic tool to facilitate treatment decisions. For multiple BM, the use of corticosteroids was established more than 40 years ago by a single randomized controlled trial (RCT); the palliative effect is high (~80%), but so is the rate of side-effects. Whole brain radiotherapy (WBRT) was evaluated in many RCTs with a high (60-90%) response rate; several RT regimens are equivalent, but a very high dose per fraction should be avoided. In multiple BM from SCLC, the effect of WBRT is comparable to that in NSCLC, but chemotherapy (CXT), although advocated, is probably less effective than RT. Single BM from NSCLC occurs in 30% of all BM cases; several prognostic classifications, including RPA, are very useful. Several options are available in single BM: WBRT, surgery (SX), radiosurgery (RS) or any combination of these. All were studied in RCTs and are reviewed here: the addition of WBRT to SX or RS gives better neurological tumour control, has little or no impact on survival, and may be more toxic; however, omitting WBRT after SX alone gives a higher risk of cerebro-spinal fluid dissemination. Prophylactic cranial irradiation (PCI) has a major role in SCLC. In limited disease, meta-analyses have shown a positive impact of PCI on the decrease of brain relapse and on survival improvement, especially for patients in complete remission; surprisingly, this has recently been confirmed also in extensive disease. Experience with PCI for NSCLC is still limited, but RCTs suggest a reduction of BM with no impact on survival. Toxicity of PCI is a matter of debate, as neurological or neuro-cognitive impairment is already present prior to PCI in almost half of patients; however, RT toxicity is probably related to total dose and dose per fraction. Perspectives: future research should concentrate on (1) combined modalities in multiple BM, (2) exploration of treatments in oligo-metastases, (3) further exploration of PCI in NSCLC, and (4) exploration of new, toxicity-sparing radiotherapy techniques (IMRT, Tomotherapy, etc.).
Abstract:
The present paper studies the probability of ruin of an insurer when excess of loss reinsurance with reinstatements is applied. In the setting of the classical Cramér-Lundberg risk model, piecewise deterministic Markov processes are used to describe the free surplus process in this more general situation. It is shown that the finite-time ruin probability is both the solution of a partial integro-differential equation and the fixed point of a contractive integral operator. We exploit the latter representation to develop and implement a recursive algorithm for numerical approximation of the ruin probability that involves high-dimensional integration. Furthermore, we study the behavior of the finite-time ruin probability under various levels of initial surplus and security loadings and compare the efficiency of the numerical algorithm with the computational alternative of stochastic simulation of the risk process.
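For the computational alternative mentioned at the end of the abstract, a minimal sketch of stochastic simulation of the finite-time ruin probability is shown below for the plain Cramér-Lundberg surplus process, without the excess-of-loss reinstatement structure; the claim distribution, intensity and safety loading are assumed example values.

```python
import numpy as np

def ruin_prob_mc(u0, T, lam, claim_sampler, premium_rate, n_paths=20_000, seed=0):
    """Monte Carlo estimate of the finite-time ruin probability in the classical
    Cramer-Lundberg model: U_t = u0 + premium_rate * t - (sum of claims up to t).
    Premiums accrue continuously, so ruin can only occur at claim instants."""
    rng = np.random.default_rng(seed)
    ruined = 0
    for _ in range(n_paths):
        t, total_claims = 0.0, 0.0
        while True:
            t += rng.exponential(1.0 / lam)        # waiting time to the next claim
            if t > T:
                break                              # horizon reached without ruin
            total_claims += claim_sampler(rng)
            if u0 + premium_rate * t - total_claims < 0:
                ruined += 1
                break
    return ruined / n_paths

# Example with assumed values: unit-mean exponential claims, intensity 1, 10% loading.
p = ruin_prob_mc(u0=5.0, T=50.0, lam=1.0,
                 claim_sampler=lambda rng: rng.exponential(1.0),
                 premium_rate=1.1)
print(f"estimated finite-time ruin probability: {p:.3f}")
```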
Abstract:
Spatial data representation and compression have become a central issue in computer graphics and image processing applications. Quadtrees, hierarchical data structures based on the principle of recursive decomposition of space, offer a compact and efficient representation of an image. For a given image, the choice of the quadtree root node plays an important role in its quadtree representation and in the final data compression. The goal of this thesis is to present a heuristic algorithm for finding the root node of a region quadtree which reduces the number of leaf nodes compared with the standard quadtree decomposition. The empirical results indicate that the proposed algorithm improves the quadtree representation and data compression in comparison with the traditional method.
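A minimal sketch of the underlying region-quadtree decomposition is shown below: a block becomes a leaf when it is homogeneous, otherwise it is split recursively into four quadrants. The leaf count returned is the quantity the proposed root-placement heuristic aims to reduce; the heuristic itself is not reproduced here.

```python
import numpy as np

def quadtree_leaves(img):
    """Number of leaf nodes in the region quadtree of a binary image
    (assumed square with a power-of-two side): a block becomes a leaf when
    all of its pixels share the same value, otherwise it is split into quadrants."""
    if img.min() == img.max():
        return 1
    h2, w2 = img.shape[0] // 2, img.shape[1] // 2
    return (quadtree_leaves(img[:h2, :w2]) + quadtree_leaves(img[:h2, w2:]) +
            quadtree_leaves(img[h2:, :w2]) + quadtree_leaves(img[h2:, w2:]))

# Toy 8x8 image with a 4x4 foreground square: one uniform dark quadrant
# plus three uniform light quadrants -> 4 leaves.
img = np.zeros((8, 8), dtype=int)
img[:4, :4] = 1
print(quadtree_leaves(img))
```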
Abstract:
Ontario bansho is an emergent mathematics instructional strategy, used by teachers working within communities of practice, that has been deemed to have a transformational effect on teachers' professional learning of mathematics. This study sought to answer the following question: How does teachers' implementation of Ontario bansho within their communities of practice inform their professional learning process concerning mathematics-for-teaching? Two other key questions also guided the study: What processes support teachers' professional learning of content-for-teaching? What conditions support teachers' professional learning of content-for-teaching? The study followed an interpretive phenomenological approach, collecting data from a purposive sampling of teachers as participants. The researcher conducted interviews and followed an interpretive approach to data analysis to investigate how teachers construct meaning and create interpretations through their social interactions. The study developed a model of professional learning made up of 3 processes in which the participants engaged (informing with resources, engaging with students, and visualizing and schematizing) and 2 conditions that supported these processes (ownership and community). The 3 processes occur in ways that are complex, recursive, unpredictable, and contextual. This model provides a framework for facilitators and leaders to plan effective, content-relevant professional learning by placing teachers, students, and their learning at the heart of professional learning.
Abstract:
Despite recent well-known advancements in patient care in the medical fields, such as patient-centeredness and evidence-based medicine and practice, rather less is known about their effects on the particulars of clinician-patient encounters. The emphasis in clinical encounters remains mostly on treatment and diagnosis and less on communicative competency or engagement for medical professionals. The purpose of this narrative study was to explore interactive competencies in diagnostic and therapeutic encounters and intake protocols within the context of the physicians’, nurses’, and medical receptionists’ perspectives and experiences. Literature on narrative medicine, phenomenology and medicine, therapeutic relationships, cultural and communication competency, and non-Western perspectives on human communication provided the guiding theoretical frameworks for the study. Three data sets were used: 13 participant interviews (5 physicians, 4 nurses, and 4 medical receptionists), policy documents (physicians, nurses, and medical receptionists), and a website (Communication and Cultural Competency). The researcher then engaged in triangulated analyses, including NVivo, manifest and latent analysis, Mishler’s (1984, 1995) narrative elements and Charon’s (2005, 2006a, 2006b, 2013) narrative themes, in recursive, overlapping, comparative and intersected analysis strategies. A common factor affecting physicians’ relationships with their clients was limitation of time, including limited time (a) to listen, (b) to come up with a proper diagnosis, and (c) to engage in decision making in critical conditions, as well as limited time for patients’ visits. For almost all nurse participants in the study, establishing therapeutic relationships meant being compassionate and empathetic. The goals of intake protocols for the medical receptionists were to be empathetic to patients, to be attentive listeners, to develop rapport, and to be conventionally polite to patients. Participants with the least amount of training and preparation (medical receptionists) appeared to be more committed to working narratively in connecting with patients and establishing human relationships, as well as in listening to patients’ stories and providing support to narrow down the reason for their visit. The diagnostic and intake “success stories” regarding patient clinical encounters for the other study participants were focused on a timely securing of patient information, with some acknowledgement of rapport and empathy. Patient-centeredness emerged as a discourse practice, with ambiguous or nebulous enactment of its premises in most clinical settings.
Abstract:
This paper assesses the empirical performance of an intertemporal option pricing model with latent variables which generalizes the Hull-White stochastic volatility formula. Using this generalized formula in an ad hoc fashion to extract two implicit parameters and forecast next-day S&P 500 option prices, we obtain pricing errors similar to those produced by implied volatility alone, as in the Hull-White case. When we specialize this model to an equilibrium recursive utility model, we show through simulations that option prices are more informative than stock prices about the structural parameters of the model. We also show that a simple method of moments with a panel of option prices provides good estimates of the parameters of the model. This lays the ground for an empirical assessment of this equilibrium model with S&P 500 option prices in terms of pricing errors.
Abstract:
This paper develops a general stochastic framework and an equilibrium asset pricing model that make clear how attitudes towards intertemporal substitution and risk matter for option pricing. In particular, we show under which statistical conditions option pricing formulas are not preference-free, in other words, when preferences are not hidden in the stock and bond prices as they are in the standard Black and Scholes (BS) or Hull and White (HW) pricing formulas. The dependence of option prices on preference parameters comes from several instantaneous causality effects such as the so-called leverage effect. We also emphasize that the most standard asset pricing models (CAPM for the stock and BS or HW preference-free option pricing) are valid under the same stochastic setting (typically the absence of leverage effect), regardless of preference parameter values. Even though we propose a general non-preference-free option pricing formula, we always keep in mind that the BS formula is dominant both as a theoretical reference model and as a tool for practitioners. Another contribution of the paper is to characterize why the BS formula is such a benchmark. We show that, as soon as we are ready to accept a basic property of option prices, namely their homogeneity of degree one with respect to the pair formed by the underlying stock price and the strike price, the necessary statistical hypotheses for homogeneity provide BS-shaped option prices in equilibrium. This BS-shaped option-pricing formula allows us to derive interesting characterizations of the volatility smile, that is, the pattern of BS implicit volatilities as a function of the option moneyness. First, the asymmetry of the smile is shown to be equivalent to a particular form of asymmetry of the equivalent martingale measure. Second, this asymmetry appears precisely when there is either a premium on an instantaneous interest rate risk or on a generalized leverage effect or both, in other words, whenever the option pricing formula is not preference-free. Therefore, the main conclusion of our analysis for practitioners should be that an asymmetric smile is indicative of the relevance of preference parameters to price options.
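The homogeneity property invoked above can be checked numerically. The sketch below uses the standard Black-Scholes call formula with hypothetical inputs and verifies that scaling the stock price and strike by the same factor scales the option price by that factor, which is the degree-one homogeneity the argument relies on.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes European call price."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Homogeneity of degree one in (S, K): scaling both by lambda scales the price by lambda,
# so the price depends on the pair only through moneyness up to that scale factor.
S, K, T, r, sigma, lam = 100.0, 105.0, 0.5, 0.02, 0.2, 3.0
print(lam * bs_call(S, K, T, r, sigma))
print(bs_call(lam * S, lam * K, T, r, sigma))   # identical value
```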
Abstract:
Conditional heteroskedasticity is an important feature of many macroeconomic and financial time series. Standard residual-based bootstrap procedures for dynamic regression models treat the regression error as i.i.d. These procedures are invalid in the presence of conditional heteroskedasticity. We establish the asymptotic validity of three easy-to-implement alternative bootstrap proposals for stationary autoregressive processes with martingale difference sequence (m.d.s.) errors subject to possible conditional heteroskedasticity of unknown form. These proposals are the fixed-design wild bootstrap, the recursive-design wild bootstrap and the pairwise bootstrap. In a simulation study, all three procedures tend to be more accurate in small samples than the conventional large-sample approximation based on robust standard errors. In contrast, standard residual-based bootstrap methods for models with i.i.d. errors may be very inaccurate if the i.i.d. assumption is violated. We conclude that in many empirical applications the proposed robust bootstrap procedures should routinely replace conventional bootstrap procedures for autoregressions based on the i.i.d. error assumption.
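A minimal sketch of the recursive-design wild bootstrap for a fitted AR(1) is given below, assuming Rademacher multipliers and an ARCH-type error process as the data-generating example; it illustrates the recursion y*_t = rho_hat * y*_{t-1} + eta_t * e_t rather than reproducing the paper's full simulation design.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate an AR(1) with ARCH(1) (conditionally heteroskedastic) errors; parameters assumed.
n, rho = 500, 0.6
y, prev_e, prev_y = np.empty(n), 0.0, 0.0
for t in range(n):
    prev_e = np.sqrt(0.5 + 0.4 * prev_e**2) * rng.normal()
    prev_y = rho * prev_y + prev_e
    y[t] = prev_y

# OLS estimate of rho (no intercept, for simplicity) and its residuals.
x, ylag = y[1:], y[:-1]
rho_hat = (ylag @ x) / (ylag @ ylag)
resid = x - rho_hat * ylag

# Recursive-design wild bootstrap: rebuild the series recursively from the fitted AR(1),
# multiplying each residual by an external Rademacher draw to preserve heteroskedasticity.
B = 999
rho_star = np.empty(B)
for b in range(B):
    e_star = resid * rng.choice([-1.0, 1.0], size=resid.size)
    y_star, prev = np.empty_like(x), y[0]
    for t in range(x.size):
        prev = rho_hat * prev + e_star[t]
        y_star[t] = prev
    ylag_star = np.concatenate(([y[0]], y_star[:-1]))
    rho_star[b] = (ylag_star @ y_star) / (ylag_star @ ylag_star)

print(f"rho_hat = {rho_hat:.3f}, recursive wild-bootstrap s.e. = {rho_star.std(ddof=1):.3f}")
```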
Abstract:
In ecology, for example in studies of the services provided by ecosystems, descriptive, explanatory and predictive modelling each have their own distinct place. Certain well-defined situations call for one or the other of these types of modelling, and the right choice must be made so that the model can be used in a way consistent with the objectives of the study. In this work, we first explore the explanatory power of the multivariate regression tree (MRT). This modelling method is based on a recursive bipartition algorithm and on a resampling method used to prune the final model, which is a tree, in order to obtain the model that yields the best predictions. This asymmetric two-table analysis produces homogeneous groups of objects of the response table, with the divisions between groups corresponding to cut points of the variables of the explanatory table that mark the most abrupt changes in the response. We show that, in order to compute the explanatory power of the MRT, an adjusted coefficient of determination must be defined in which the degrees of freedom of the model are estimated by an algorithm. This estimate of the population coefficient of determination is practically unbiased. Since the MRT rests on assumptions of discontinuity whereas canonical redundancy analysis (RDA) models continuous linear gradients, comparing their respective explanatory powers makes it possible, among other things, to identify which type of pattern the response follows as a function of the explanatory variables. The comparison of explanatory power between RDA and MRT was motivated by the extensive use of RDA to study beta diversity. Still from an explanatory perspective, we define a new procedure, the cascade multivariate regression tree (CMRT), which builds a model while imposing a hierarchical order on the hypotheses under study. This new procedure makes it possible to study the hierarchical effect of two sets of explanatory variables, a main set and a subordinate set, and then to compute their explanatory power. The final model is interpreted as in a nested MANOVA. The results of this analysis can reveal additional information about the links between the response and the explanatory variables, for example interactions between the two explanatory sets that were not brought out by the usual MRT analysis. We then study the predictive power of generalized linear models by modelling the biomass of different tropical tree species as a function of some of their allometric measurements. More specifically, we examine the ability of Gaussian and gamma error structures to provide the most accurate predictions. We show that, for one particular species, the predictive power of a model using the gamma error structure is superior. This study has a practical scope and is intended as an example for managers wishing to estimate precisely the carbon captured by tropical tree plantations. Our conclusions could become an integral part of a programme for reducing carbon emissions through changes in land use.
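A minimal sketch of one step of the recursive bipartition underlying the MRT is given below, assuming a single explanatory variable and a synthetic multivariate response with an abrupt change; it searches for the cut point minimizing the within-group sum of squares, without the cross-validation pruning or the adjusted coefficient of determination described above.

```python
import numpy as np

def best_split(x, Y):
    """One step of the MRT's recursive bipartition: find the cut point of a single
    explanatory variable x that minimizes the within-group sum of squares of the
    multivariate response matrix Y (rows = sites, columns = species)."""
    order = np.argsort(x)
    x_sorted, Y_sorted = x[order], Y[order]
    best_sse, best_cut = np.inf, None
    for i in range(1, len(x)):
        if x_sorted[i] == x_sorted[i - 1]:
            continue                              # cannot split between tied values
        left, right = Y_sorted[:i], Y_sorted[i:]
        sse = ((left - left.mean(0))**2).sum() + ((right - right.mean(0))**2).sum()
        if sse < best_sse:
            best_sse, best_cut = sse, 0.5 * (x_sorted[i - 1] + x_sorted[i])
    return best_sse, best_cut

rng = np.random.default_rng(3)
x = rng.uniform(0, 10, size=40)                        # one environmental gradient
Y = rng.normal(size=(40, 5)) + (x > 6)[:, None] * 2.0  # abrupt community change at x = 6
sse, cut = best_split(x, Y)
print(f"best cut point ~ {cut:.2f} (within-group SSE {sse:.1f})")
```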
Abstract:
The subject of this dissertation is Turing's ordinal logic. We refer to Turing's original text, "Systems of logic based on ordinals" (Turing [1939]), the thesis Turing wrote at Princeton under the supervision of Professor Alonzo Church. The principle of an ordinal logic is to overcome, locally, Gödelian incompleteness for arithmetic by means of recursively consistent progressions of axioms. Given its considerable importance for computability theory and the foundations of mathematics, this little-known research of Turing's deserves particular attention. We retrace here the project of an ordinal logic, from its origins in Gödel's incompleteness theorem to its advances in the developments of computability theory. We conclude with a philosophical discussion of the foundations of mathematics from a finitist point of view.
Abstract:
In certain circumstances, group actions perform better than individual actions. In such situations it is preferable to form coalitions. These coalitions may be disjoint or nested. The economic literature places a strong emphasis on modelling agreements in which coalitions of economic agents are disjoint sets. Yet we observe in everyday life that political, environmental, free-trade and informal-insurance coalitions are most of the time nested. It therefore becomes imperative to understand the economics of nested coalitions. My thesis develops an analytical framework for understanding the formation and performance of coalitions even when they are nested. In the first chapter I develop a bargaining game that allows the formation of nested coalitions. I show that this game admits an equilibrium and I develop an algorithm for computing the equilibrium allocations in symmetric games. I show that any network structure can be decomposed in a unique way into a structure of nested coalitions, and that under certain conditions this structure corresponds to an equilibrium structure of an underlying game. In the second chapter I introduce a new notion of the core for the case where nested coalitions are allowed. I show that this notion is a natural generalization of the coalition-structure core. Going further, I introduce more refined agents and obtain the nested-coalition-structure core, which I show to be a refinement of the first notion. In the rest of the thesis, I apply the theories developed in the first two chapters to concrete cases. The third chapter is an application of the one-to-one relationship established in the first chapter between coalition formation and network formation. I propose a realistic and effective model of informal insurance, introducing four major innovations into the economic literature on informal insurance: a merger of the group approach and the social-network approach, the possibility of nested informal-insurance organizations, an endogenous punishment scheme, and externalities. I characterize stable informal-insurance agreements and isolate the conditions that lead agents to deviate. It is accepted in the literature that only high-income individuals can afford to violate informal-insurance agreements; I give the conditions under which this assumption holds, but I also show that it can fail under other realistic conditions. Finally, I derive comparative-statics results under two different sharing norms. In the fourth and last chapter, I propose a model of informal insurance in which homogeneous groups are built on the basis of pre-existing trust relationships. These groups are nested and represent risk-sharing sets. This approach is more general than the traditional group or network approaches. I characterize stable agreements without making assumptions about the discount rate, and I identify the characteristics of the stable networks that correspond to the lowest discount rates.
Although the purpose of informal insurance is to smooth consumption, I show that external effects linked in particular to the valuation of interpersonal ties reinforce stability. I develop a finite-step algorithm that equalizes consumption across all linked individuals. Because the number of steps is finite (unlike the existing infinite-step algorithms), my algorithm can realistically inform economic policy. Finally, I give comparative-statics results for certain exogenous values of the model.