920 results for: Sub-registry. Empirical Bayesian estimator. General equation. Balancing adjustment factor
Abstract:
In diabetes mellitus, a common, mainly sensory, distal symmetrical polyneuropathy (DPN) is expected to involve a large proportion of diabetic patients, according to known risk factors. Several other diabetic peripheral neuropathies are recognized, such as dysautonomia and multifocal neuropathies, including lumbosacral radiculoplexus neuropathy and oculomotor palsies. In this review, general aspects of DPN and of the other diabetic neuropathies are examined, and it is discussed why and how the general practitioner should perform a yearly examination. At present, a consensus is emerging to seek help from a neurologist when faced with forms of peripheral neuropathy other than distal symmetrical DPN.
Abstract:
ABSTRACT: Research in empirical asset pricing has pointed out several anomalies, both in the cross section and time series of asset prices and in investors' portfolio choice. This dissertation aims to uncover the forces driving some of these "puzzling" asset pricing dynamics and portfolio decisions observed in the financial market. Throughout the dissertation I construct and study dynamic general equilibrium models of heterogeneous investors in the presence of frictions and evaluate quantitatively their implications for financial-market asset prices and portfolio choice. I also explore the potential roots of puzzles in international finance. Chapter 1 shows that, by jointly introducing endogenous no-default borrowing constraints and heterogeneous beliefs in a dynamic general-equilibrium economy, many empirical features of stock return volatility can be reproduced. While most of the research on stock return volatility is empirical, this chapter provides a theoretical framework able to reproduce simultaneously the cross-sectional and time-series stylized facts concerning stock returns and their volatility. In contrast to the existing theoretical literature on stock return volatility, I do not impose persistence or regimes in any of the exogenous state variables or in preferences. Volatility clustering, asymmetry in the stock return-volatility relationship, and the pricing of multi-factor volatility components in the cross section all arise endogenously as a consequence of the feedback between the binding of the no-default constraints and heterogeneous beliefs. Chapters 2 and 3 explore the implications of differences of opinion across investors in different countries for international asset pricing anomalies. Chapter 2 demonstrates that several international finance "puzzles" can be reproduced by a single risk factor which captures heterogeneous beliefs across international investors.
These puzzles include: (i) home equity preference; (ii) the dependence of firm returns on local and foreign factors; (iii) the co-movement of returns and international capital flows; and (iv) abnormal returns around foreign firms' cross-listing events in the local market. These are reproduced in a setup with symmetric information and a perfectly integrated world with multiple countries and independent processes producing the same good. Chapter 3 shows that, by extending this framework to multiple goods and correlated production processes, the "forward premium puzzle" arises naturally as compensation for the heterogeneous expectations about the depreciation of the exchange rate held by international investors. Chapters 2 and 3 propose differences of opinion across international investors as a potential resolution of several international finance "puzzles". In a globalized world where both capital and information flow freely across countries, this explanation seems more appealing than existing asymmetric-information or segmented-markets theories aiming to explain international finance puzzles.
Abstract:
In this paper we study, taking as theoretical reference the economic model of crime (Becker, 1968; Ehrlich, 1973), the socioeconomic and demographic determinants of crime in Spain, paying particular attention to the role of provincial peculiarities. We estimate a crime equation using a panel dataset of Spanish provinces (NUTS3) for the period 1993 to 1999, employing the GMM-system estimator. Empirical results suggest that the lagged crime rate and the clear-up rate are correlated with all typologies of crime considered. Property crimes are better explained by socioeconomic variables (GDP per capita, GDP growth rate, and the percentage of the population with a high school or university degree), while demographic factors exert important and significant influences, in particular on crimes against the person. These results are obtained using an instrumental variable approach that takes advantage of the dynamic properties of our dataset to control both for measurement errors in crime data and for joint endogeneity of the explanatory variables.
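The dynamic-panel strategy described above can be sketched numerically. The snippet below is a minimal illustration, not the paper's GMM-system estimator: it uses the simpler Anderson-Hsiao approach of first-differencing away the unit fixed effect and instrumenting the lagged differenced outcome with the twice-lagged level. All data are simulated and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 200, 8          # units (e.g. provinces) and periods
rho, beta = 0.5, 1.0   # true persistence and covariate effect

# Simulate a dynamic panel: y_it = rho*y_i,t-1 + beta*x_it + alpha_i + eps_it
alpha = rng.normal(size=N)
x = rng.normal(size=(N, T))
y = np.zeros((N, T))
for t in range(1, T):
    y[:, t] = rho * y[:, t - 1] + beta * x[:, t] + alpha + 0.5 * rng.normal(size=N)

# First-difference to remove alpha_i, then instrument dy_{t-1} with y_{t-2}
dy  = y[:, 2:] - y[:, 1:-1]   # dy_t
dy1 = y[:, 1:-1] - y[:, :-2]  # dy_{t-1} (endogenous regressor)
dx  = x[:, 2:] - x[:, 1:-1]
z   = y[:, :-2]               # instrument: level lagged twice

# Just-identified IV (2SLS) with instruments [z, dx] for regressors [dy1, dx]
Y = dy.ravel()
X = np.column_stack([dy1.ravel(), dx.ravel()])
Z = np.column_stack([z.ravel(), dx.ravel()])
beta_iv = np.linalg.solve(Z.T @ X, Z.T @ Y)
print(beta_iv)   # typically close to the true (0.5, 1.0)
```

The GMM-system estimator the paper uses additionally exploits the levels equation with lagged differences as instruments, which improves efficiency when the series are persistent.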
Abstract:
With the advancement of high-throughput sequencing and the dramatic increase of available genetic data, statistical modeling has become an essential part of the field of molecular evolution. Statistical modeling has led to many interesting discoveries in the field, from the detection of highly conserved or diverse regions in a genome to phylogenetic inference of species' evolutionary history. Among the different types of genome sequences, protein-coding regions are particularly interesting due to their impact on proteins. The building blocks of proteins, i.e. amino acids, are coded by triplets of nucleotides, known as codons. Accordingly, studying the evolution of codons leads to a fundamental understanding of how proteins function and evolve. Current codon models can be classified into three principal groups: mechanistic codon models, empirical codon models and hybrid ones. The mechanistic models attract particular attention due to the clarity of their underlying biological assumptions and parameters. However, they rely on simplifying assumptions that are required to overcome the burden of computational complexity. The main assumptions applied to current mechanistic codon models are that (a) double and triple substitutions of nucleotides within codons are negligible, (b) there is no mutation variation among the nucleotides of a single codon, and (c) the HKY nucleotide model is sufficient to capture the essence of transition-transversion rates at the nucleotide level. In this thesis, I develop a framework of mechanistic codon models, named the KCM-based model family framework, based on holding or relaxing the assumptions above. Accordingly, eight different models are proposed from the eight combinations of holding or relaxing the assumptions, from the simplest one, which holds all the assumptions, to the most general one, which relaxes all of them.
The models derived from the proposed framework allow me to investigate the biological plausibility of the three simplifying assumptions on real data sets, as well as to find the best model aligned with the underlying characteristics of those data sets. In this thesis I pursue two main objectives. The first is to develop the KCM-based model family framework described above; our experiments show that, in none of the real data sets, is holding all three assumptions realistic. This means that using simple models that hold these assumptions can be misleading and can result in inaccurate parameter estimates. The second objective is to develop a generalized mechanistic codon model that relaxes all three simplifying assumptions while remaining computationally efficient, using a matrix operation called the Kronecker product. Our experiments show that, on randomly chosen data sets, the proposed generalized mechanistic codon model outperforms the other codon models with respect to the AICc metric in about half of the data sets. Furthermore, I show through several experiments that the proposed general model is biologically plausible.
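As an illustration of how Kronecker operations turn a nucleotide-level model into a codon-level one, the sketch below builds a 64 x 64 codon generator from an HKY85 matrix via a Kronecker sum. This is not the thesis's full KCM construction: a Kronecker sum allows only single-nucleotide substitutions (assumption (a) above), so it illustrates the machinery rather than the generalized model, and all parameter values are illustrative.

```python
import numpy as np

def hky_rate_matrix(kappa=2.0, pi=(0.25, 0.25, 0.25, 0.25)):
    """HKY85 nucleotide rate matrix, nucleotide order A, C, G, T.
    Transitions (A<->G, C<->T) are scaled by kappa."""
    pi = np.asarray(pi)
    transitions = {(0, 2), (2, 0), (1, 3), (3, 1)}
    Q = np.zeros((4, 4))
    for i in range(4):
        for j in range(4):
            if i != j:
                rate = kappa if (i, j) in transitions else 1.0
                Q[i, j] = rate * pi[j]
    np.fill_diagonal(Q, -Q.sum(axis=1))
    return Q

Q = hky_rate_matrix()
I = np.eye(4)

# Kronecker sum: a codon-level generator in which exactly one of the three
# codon positions changes per event (double/triple changes get rate zero)
Q_codon = (np.kron(np.kron(Q, I), I)
           + np.kron(np.kron(I, Q), I)
           + np.kron(np.kron(I, I), Q))

print(Q_codon.shape)  # (64, 64)
# rows of a rate matrix sum to zero, and e.g. AAA -> TTT has rate 0
print(np.allclose(Q_codon.sum(axis=1), 0), Q_codon[0, 63])
```

Relaxing assumption (a) amounts to replacing the zero entries for multi-nucleotide changes with nonzero rates, which is where the Kronecker-product formulation of the generalized model comes in.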
Abstract:
General Summary Although the chapters of this thesis address a variety of issues, the principal aim is common: to test economic ideas in an international economic context. The intention has been to supply empirical findings using the largest suitable data sets and the most appropriate empirical techniques. This thesis can roughly be divided into two parts: the first, corresponding to the first two chapters, investigates the link between trade and the environment; the second, the last three chapters, is related to economic geography issues. Environmental problems are omnipresent in the daily press nowadays, and one of the arguments put forward is that globalisation causes severe environmental problems through the reallocation of investments and production to countries with less stringent environmental regulations. A measure of the amplitude of this undesirable effect is provided in the first part. The third and fourth chapters explore the productivity effects of agglomeration. The computed spillover effects between different sectors indicate how cluster formation might be productivity enhancing. The last chapter is not about how to better understand the world but how to measure it, and it was just a great pleasure to work on it. "The Economist" writes every week about the impressive population and economic growth observed in China and India, and everybody agrees that the world's center of gravity has shifted. But by how much, and how fast, did it shift? An answer is given in the last part, which proposes a global measure for the location of world production and allows us to visualize our results in Google Earth. A short summary of each of the five chapters is provided below.
The first chapter, entitled "Unraveling the World-Wide Pollution-Haven Effect", investigates the relative strength of the pollution haven effect (PH, comparative advantage in dirty products due to differences in environmental regulation) and the factor endowment effect (FE, comparative advantage in dirty, capital-intensive products due to differences in endowments). We compute the pollution content of imports using the IPPS coefficients (for three pollutants, namely biological oxygen demand, sulphur dioxide and toxic pollution intensity, for all manufacturing sectors) provided by the World Bank and use a gravity-type framework to isolate the two above-mentioned effects. Our study covers 48 countries that can be classified into 29 Southern and 19 Northern countries and uses the lead content of gasoline as a proxy for environmental stringency. For North-South trade we find significant PH and FE effects going in the expected, opposite directions and being of similar magnitude. However, when looking at world trade, the effects become very small because of the high North-North trade share, for which we have no a priori expectations about the signs of these effects. Therefore popular fears about the trade effects of differences in environmental regulations might be exaggerated. The second chapter is entitled "Is Trade Bad for the Environment? Decomposing Worldwide SO2 Emissions, 1990-2000". First, we construct a novel and large database containing reasonable estimates of SO2 emission intensities per unit labor that vary across countries, periods and manufacturing sectors. Then we use these original data (covering 31 developed and 31 developing countries) to decompose the worldwide SO2 emissions into the three well-known dynamic effects (scale, technique and composition effect). We find that the positive scale (+9.5%) and the negative technique (-12.5%) effect are the main driving forces of emission changes.
Composition effects between countries and sectors are smaller, both negative and of similar magnitude (-3.5% each). Given that trade matters via the composition effects, this means that trade reduces total emissions. We next construct, in a first experiment, a hypothetical world where no trade happens, i.e. each country produces its imports at home and no longer produces its exports. The difference between the actual and this no-trade world allows us (omitting price effects) to compute a static first-order trade effect. The latter now increases total world emissions because it allows, on average, dirty countries to specialize in dirty products. However, this effect is smaller in 2000 (3.5%) than in 1990 (10%), in line with the negative dynamic composition effect identified in the previous exercise. We then propose a second experiment, comparing effective emissions with the maximum or minimum possible level of SO2 emissions. These hypothetical levels of emissions are obtained by reallocating labour accordingly across sectors within each country (under the country-employment and the world industry-production constraints). Using linear programming techniques, we show that emissions are reduced by 90% with respect to the worst case, but that they could still be reduced by another 80% if emissions were to be minimized. The findings from this chapter go together with those from chapter one in the sense that trade-induced composition effects do not seem to be the main source of pollution, at least in the recent past. Turning to the economic geography part of this thesis, the third chapter, entitled "A Dynamic Model with Sectoral Agglomeration Effects", consists of a short note that derives the theoretical model estimated in the fourth chapter. The derivation is directly based on the multi-regional framework by Ciccone (2002) but extends it in order to include sectoral disaggregation and a temporal dimension.
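The scale/technique/composition accounting used above can be made concrete with a toy two-sector example. The numbers below are illustrative (chosen so the scale effect matches the +9.5% reported), and the log decomposition is path-dependent: here the technique effect is evaluated at initial labour shares and the composition effect at new intensities.

```python
import numpy as np

# Two sectors; total labour L (scale), labour shares s (composition),
# and SO2 emission intensity per unit labour e (technique).
L0, L1 = 100.0, 109.5
s0, s1 = np.array([0.40, 0.60]), np.array([0.37, 0.63])
e0, e1 = np.array([2.00, 0.50]), np.array([1.75, 0.44])

E0 = L0 * (s0 * e0).sum()   # total emissions, period 0
E1 = L1 * (s1 * e1).sum()   # total emissions, period 1

scale       = np.log(L1 / L0)                            # more labour overall
technique   = np.log((s0 * e1).sum() / (s0 * e0).sum())  # cleaner production
composition = np.log((s1 * e1).sum() / (s0 * e1).sum())  # shift between sectors

# The three log-effects sum exactly to the total log-change in emissions
print(np.isclose(np.log(E1 / E0), scale + technique + composition))
```

With these illustrative numbers the positive scale effect is more than offset by the negative technique effect, mirroring the qualitative pattern the chapter reports.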
This allows us to write present productivity formally as a function of past productivity and other contemporaneous and past control variables. The fourth chapter, entitled "Sectoral Agglomeration Effects in a Panel of European Regions", takes the final equation derived in chapter three to the data. We investigate the empirical link between density and labour productivity based on regional data (245 NUTS-2 regions over the period 1980-2003). Using dynamic panel techniques allows us to control for the possible endogeneity of density and for region-specific effects. We find a positive long-run elasticity of labour productivity with respect to density of about 13%. When using data at the sectoral level, it seems that positive cross-sector and negative own-sector externalities are present in manufacturing, while financial services display strong positive own-sector effects. The fifth and last chapter, entitled "Is the World's Economic Center of Gravity Already in Asia?", computes the world economic, demographic and geographic centers of gravity for 1975-2004 and compares them. Based on data for the largest cities in the world and using the physical concept of center of mass, we find that the world's economic center of gravity is still located in Europe, even though there is a clear shift towards Asia. To sum up, this thesis makes three main contributions. First, it provides new estimates of orders of magnitude for the role of trade in the globalisation and environment debate. Second, it computes reliable and disaggregated elasticities for the effect of density on labour productivity in European regions. Third, it allows us, in a geometrically rigorous way, to track the path of the world's economic center of gravity.
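The center-of-mass computation in the last chapter can be sketched as follows: weight each city's position vector on the unit sphere by its economic mass, average in three dimensions, and project the interior point back to the surface. The cities and weights below are illustrative, not the chapter's data.

```python
import math

cities = [  # (latitude, longitude, economic weight) -- illustrative values
    (40.7, -74.0, 1.0),   # New York
    (48.9,   2.3, 0.8),   # Paris
    (35.7, 139.7, 0.9),   # Tokyo
    (31.2, 121.5, 0.7),   # Shanghai
]

x = y = z = w_sum = 0.0
for lat, lon, w in cities:
    phi, lam = math.radians(lat), math.radians(lon)
    x += w * math.cos(phi) * math.cos(lam)
    y += w * math.cos(phi) * math.sin(lam)
    z += w * math.sin(phi)
    w_sum += w
x, y, z = x / w_sum, y / w_sum, z / w_sum

# The weighted centre of mass lies inside the Earth; project it back
# onto the surface to obtain a latitude/longitude
lat_c = math.degrees(math.atan2(z, math.hypot(x, y)))
lon_c = math.degrees(math.atan2(y, x))
print(round(lat_c, 1), round(lon_c, 1))
```

Because the weighted points are spread widely in longitude, their horizontal components partly cancel and the projected point lands at a high northern latitude, which is why such measures track the east-west drift of the center mainly through the projected longitude.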
Abstract:
We show the appearance of spatiotemporal stochastic resonance in the Swift-Hohenberg equation. This phenomenon emerges when a control parameter varies periodically in time around the bifurcation point. By using general scaling arguments and by taking into account the common features occurring in a bifurcation, we outline possible manifestations of the phenomenon in other pattern-forming systems.
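A minimal numerical sketch of this setup, assuming a 1-D Swift-Hohenberg equation with additive noise and a control parameter swept periodically in time across the bifurcation point (all parameter values are illustrative, and the semi-implicit spectral scheme is one convenient choice, not the paper's method), is:

```python
import numpy as np

rng = np.random.default_rng(1)
n, L = 128, 32 * np.pi          # grid points, domain length
dt, steps = 0.05, 2000
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)

u = 0.01 * rng.normal(size=n)   # small random initial condition
for i in range(steps):
    # control parameter modulated periodically around the bifurcation at r = 0
    r = -0.05 + 0.2 * np.sin(2 * np.pi * i * dt / 20.0)
    noise = 0.01 * rng.normal(size=n) / np.sqrt(dt)
    rhs_hat = np.fft.fft(-u**3 + noise)
    # semi-implicit step: the stiff linear operator -(1 + d_xx)^2 + r,
    # i.e. r - (1 - k^2)^2 in Fourier space, is treated implicitly
    u_hat = (np.fft.fft(u) + dt * rhs_hat) / (1 - dt * (r - (1 - k**2) ** 2))
    u = np.real(np.fft.ifft(u_hat))

print(np.abs(u).max())   # pattern amplitude stays bounded
```

Diagnosing stochastic resonance proper would require sweeping the noise strength and measuring the response of the pattern amplitude at the forcing period; the loop above only provides the integrator for such an experiment.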
Abstract:
Outgoing radiation is introduced in the framework of classical predictive electrodynamics, using the Lorentz-Dirac equation as a subsidiary condition. In a perturbative scheme in the charges, the first radiative self-terms of the accelerations, momentum and angular momentum of a two-charge system without an external field are calculated.
Abstract:
A model of anisotropic fluid with three perfect fluid components in interaction is studied. Each fluid component obeys the stiff matter equation of state and is irrotational. The interaction is chosen to reproduce an integrable system of equations similar to the one associated with self-dual SU(2) gauge fields. An extension of the Belinsky-Zakharov version of the inverse scattering transform is presented and used to find soliton solutions to the coupled Einstein equations. A particular class of solutions that can be interpreted as lumps of matter propagating in empty space-time is examined.
Abstract:
Traffic safety engineers are among the early adopters of Bayesian statistical tools for analyzing crash data. As in many other areas of application, empirical Bayes methods were their first choice, perhaps because they represent an intuitively appealing, yet relatively easy-to-implement, alternative to purely classical approaches. With the enormous progress in numerical methods made in recent years, and with the availability of free, easy-to-use software that permits implementing a fully Bayesian approach, however, there is now ample justification to progress towards fully Bayesian analyses of crash data. The fully Bayesian approach, in particular as implemented via multi-level hierarchical models, has many advantages over the empirical Bayes approach. In a full Bayesian analysis, prior information and all available data are seamlessly integrated into posterior distributions on which practitioners can base their inferences. All uncertainties are thus accounted for in the analyses, and there is no need to pre-process data to obtain Safety Performance Functions and other such prior estimates of the effect of covariates on the outcome of interest. In this light, fully Bayesian methods may well be less costly to implement and may result in safety estimates with more realistic standard errors. In this manuscript, we present the full Bayesian approach to analyzing traffic safety data and focus on highlighting the differences between the empirical Bayes and the full Bayes approaches. We use an illustrative example to discuss a step-by-step Bayesian analysis of the data and to show some of the types of inferences that are possible within the full Bayesian framework.
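The empirical Bayes step that a full Bayesian analysis renders unnecessary can be sketched as follows. Under the usual Poisson-gamma (negative binomial) crash model, the EB estimate for a site is a weighted average of the Safety Performance Function prediction and the site's observed count. All numbers below are illustrative.

```python
import numpy as np

def eb_estimate(observed, mu, phi):
    """Empirical Bayes expected crash frequency under a Poisson-gamma model:
    a weighted average of the SPF prediction mu and the observed count,
    with phi the (pre-estimated) overdispersion parameter."""
    w = phi / (phi + mu)              # weight placed on the SPF prediction
    return w * mu + (1 - w) * observed

obs = np.array([12.0, 3.0, 7.0])      # observed crashes at three sites
mu = np.array([6.0, 4.0, 7.0])        # SPF (prior) predictions
print(eb_estimate(obs, mu, phi=4.0))  # each count shrinks toward its SPF value
```

Note that `mu` and `phi` must be estimated in a separate, earlier step from reference sites; a full Bayes analysis instead places priors on these quantities and propagates their uncertainty into the final estimates, which is precisely the advantage highlighted above.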
Abstract:
Background: Cardiovascular diseases (CVD), their well-established risk factors (CVRF) and mental disorders are common and co-occur more frequently than would be expected by chance. However, the pathogenic mechanisms and course determinants of both CVD and mental disorders have only been partially identified.

Methods/Design: Comprehensive follow-up of CVRF and CVD with a psychiatric exam in all subjects who participated in the baseline cross-sectional CoLaus study (2003-2006) (n = 6,738), which also included a comprehensive genetic assessment. The somatic investigation will include a shortened questionnaire on CVRF, CV events and new CVD since baseline, and measurements of the same clinical and biological variables as at baseline. In addition, pro-inflammatory markers, persistent pain, and sleep patterns and disorders will be assessed. In the case of a new CV event, detailed information will be abstracted from medical records. Similarly, data on the cause of death will be collected from the Swiss National Death Registry. The comprehensive psychiatric investigation of the CoLaus/PsyCoLaus study will use contemporary epidemiological methods, including semi-structured diagnostic interviews, experienced clinical interviewers, standardized diagnostic criteria covering both threshold (according to DSM-IV) and sub-threshold syndromes, and supplementary information on risk and protective factors for disorders. In addition, screening for objective cognitive impairment will be performed in participants older than 65 years.

Discussion: The combined CoLaus/PsyCoLaus sample provides a unique opportunity to obtain prospective data on the interplay between CVRF/CVD and mental disorders, overcoming the limitations of previous research by bringing together a comprehensive investigation of both CVRF and mental disorders, a large number of biological variables, and a genome-wide genetic assessment in participants recruited from the general population.
Abstract:
Purpose: The application of NIC 32 to cooperatives has generated considerable controversy in recent years. To date, several studies have attempted to anticipate the possible effects of its application. This study analyses the impact of the first-time application of NIC 32 in the cooperative sector.

Design/methodology/approach: A sample of 98 cooperatives was selected, and a comparative analysis of their financial information before and after the application of NIC 32 was carried out to determine the existing differences. The Wilcoxon rank-sum test was used to check whether these differences are significant. The Mann-Whitney U test was also used to check for significant differences in the relative impact of applying NIC 32 across several groups of cooperatives. Finally, the effects of applying NIC 32 on the financial and economic position of the cooperatives, and on the evolution of their intangible assets, were analysed using financial-analysis techniques.

Contributions and results: The results confirm that the application of NIC 32 causes significant differences in some items of the balance sheet and the profit and loss account, as well as in the ratios analysed. The main differences consist of a reduction in the level of capitalization and an increase in the indebtedness of the cooperatives, together with a general worsening of the solvency and financial-autonomy ratios.

Limitations: It should be borne in mind that the study was carried out on a sample of cooperatives that are required to have their annual accounts audited; the results should therefore be interpreted in a context of large cooperatives. It should also be noted that we performed a comparative analysis of the 2011 and 2010 annual accounts. This allowed us to identify the differences in the cooperatives' financial information before and after applying NIC 32, although some of these differences could also be caused by other factors, such as the economic situation, changes in the application of accounting standards, etc.

Originality/value: We believe this is the right moment to carry out this research, since from 2011 all Spanish cooperatives have had to apply the accounting standards adapted to NIC 32. Moreover, as far as we know, no other similar studies exist based on annual accounts of cooperatives that have already applied the accounting standards adapted to NIC 32. We believe the results of this research can be useful to different stakeholders. First, so that accounting standard-setters can assess the scope of NIC 32 in cooperatives and propose improvements to the content of the standard. Second, so that cooperatives themselves, federations, confederations and other cooperative bodies have information on the economic impact of the first-time application of NIC 32 and can make whatever assessments they consider appropriate. And third, so that financial institutions, auditors, advisers of cooperatives and other stakeholders have information on the changes in the cooperatives' annual accounts and can take them into account when making decisions.

Keywords: Cooperatives, net equity, share capital, NIC 32, solvency, effects of accounting regulation, financial information, ratios.
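As a sketch of the group-comparison step, the Mann-Whitney U statistic used in the study can be computed by hand from rank sums. The implementation and data below are illustrative: midranks handle ties, and the smaller of the two U values is returned.

```python
import numpy as np

def mann_whitney_u(a, b):
    """Mann-Whitney U statistic for two independent samples,
    computed from the rank sum of the first sample (midranks for ties)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    combined = np.concatenate([a, b])
    order = combined.argsort()
    ranks = np.empty_like(combined)
    ranks[order] = np.arange(1, combined.size + 1)
    for v in np.unique(combined):          # assign midranks to tied values
        mask = combined == v
        ranks[mask] = ranks[mask].mean()
    r_a = ranks[: a.size].sum()
    u_a = r_a - a.size * (a.size + 1) / 2
    return min(u_a, a.size * b.size - u_a)

# Two hypothetical groups of relative-impact measures: complete separation
print(mann_whitney_u([1.2, 3.4, 2.2], [4.5, 6.1, 5.0, 7.2]))  # -> 0.0
```

U = 0 indicates complete separation between the two groups; in practice the statistic is compared against its null distribution (or a normal approximation) to obtain the significance level the study reports.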