11 results for Random Coefficient Autoregressive Model (RCAR(1))
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
The study of random probability measures is a lively research topic that has attracted interest from different fields in recent years. In this thesis, we consider random probability measures in the context of Bayesian nonparametrics, where the law of a random probability measure is used as a prior distribution, and in the context of distributional data analysis, where the goal is to perform inference given a sample from the law of a random probability measure. The contributions contained in this thesis can be subdivided according to three different topics: (i) the use of almost surely discrete repulsive random measures (i.e., measures whose support points are well separated) for Bayesian model-based clustering, (ii) the proposal of new laws for collections of random probability measures for Bayesian density estimation of partially exchangeable data subdivided into different groups, and (iii) the study of principal component analysis and regression models for probability distributions seen as elements of the 2-Wasserstein space. Specifically, for point (i) we propose an efficient Markov chain Monte Carlo algorithm for posterior inference, which sidesteps the need for split-merge reversible jump moves typically associated with poor performance; we propose a model for clustering high-dimensional data by introducing a novel class of anisotropic determinantal point processes; and we study the distributional properties of the repulsive measures, shedding light on important theoretical results which enable more principled prior elicitation and more efficient posterior simulation algorithms. For point (ii), we consider several models suitable for clustering homogeneous populations, inducing spatial dependence across groups of data, and extracting the characteristic traits common to all the data groups, and we propose a novel vector autoregressive model to study the growth curves of Singaporean children. Finally, for point (iii), we propose a novel class of projected statistical methods for distributional data analysis for measures on the real line and on the unit circle.
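As background for point (iii): for probability measures on the real line, the 2-Wasserstein distance has a well-known closed form in terms of quantile functions (a standard fact, not specific to this thesis),

$$
W_2^2(\mu,\nu) = \int_0^1 \left( F_\mu^{-1}(t) - F_\nu^{-1}(t) \right)^2 \, dt ,
$$

where $F_\mu^{-1}$ and $F_\nu^{-1}$ are the quantile functions of $\mu$ and $\nu$. This isometric embedding of distributions into a subset of $L^2(0,1)$ is what makes "projected" versions of principal component analysis and regression tractable for distributional data.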
Abstract:
The papers included in this thesis deal with a few aspects of insurance economics that have seldom been addressed in the applied literature. In the first paper I apply, for the first time, the tools of the economics of crime to study the determinants of fraud, using data on Italian provinces. The contributions to the literature are manifold:
- the price of insuring has a positive correlation with the propensity to defraud;
- social norms constrain fraudulent behaviour, but their strength is curtailed in economic downturns;
- I apply a simple extension of the Random Coefficient model, which allows for the presence of time-invariant covariates and asymmetries in the impact of the regressors.
The second paper assesses how the evolution of macro-prudential regulation of insurance companies has been reflected in their equity prices. I employ a standard event study methodology, deriving the definition of the "control" and "treatment" groups from what is implied by the regulatory framework. The main results are:
- markets care about the evolution of the legislation; their perception has shifted from an initial positive assessment of a possible implicit "too big to fail" subsidy to a more negative one related to its cost in terms of stricter capital requirements;
- the size of this phenomenon is positively related to the leverage, size and geographical location of the insurance companies.
The third paper introduces a novel methodology to forecast non-life insurance premiums and profitability as a function of macroeconomic variables, using the simultaneous-equation framework traditionally employed in macroeconometric models and a simple theoretical model of insurance pricing to derive a long-term relationship between premiums, claims expenses and short-term rates. The model is shown to provide a better forecast of premiums and profitability than the single-equation specifications commonly used in applied analysis.
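For context, a textbook random coefficient panel specification of the kind the first paper extends can be sketched as (generic notation, not the paper's exact model)

$$
y_{it} = \mathbf{x}_{it}'\boldsymbol{\beta}_i + \mathbf{z}_i'\boldsymbol{\gamma} + \varepsilon_{it},
\qquad
\boldsymbol{\beta}_i = \boldsymbol{\beta} + \mathbf{u}_i, \quad \mathbf{u}_i \sim (\mathbf{0},\boldsymbol{\Sigma}),
$$

where the unit-specific slopes $\boldsymbol{\beta}_i$ vary randomly around a common mean and $\mathbf{z}_i$ collects time-invariant covariates, whose inclusion is one of the features the extension allows for.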
Abstract:
Many efforts have been devoted in recent years to reducing uncertainty in hydrological modelling predictions. The principal sources of uncertainty are input errors, due to inaccurate rainfall prediction, and model errors, arising from the approximation with which the water flow processes in the soil and the river discharges are described. The aim of the present work is to develop a Bayesian model in order to reduce the uncertainty in the discharge predictions for the Reno river. The prior distribution is given by an autoregressive model, while the likelihood function is provided by a linear equation which relates observed values of discharge in the past to the predictions of the hydrological TOPKAPI model obtained from the rainfall predictions of the limited-area model COSMO-LAMI. The posterior estimates are obtained through an H∞ filter, because the statistical properties of the estimation errors are not known. In this work a stationary and a dual adaptive filter are implemented and compared. Statistical analyses of the estimation errors and the description of three case studies of flood events that occurred during the fall seasons from 2003 to 2005 are reported. Results have also revealed that the errors can be described as a Markovian process only to a first approximation. For the same period, an ensemble of posterior estimates is obtained through the COSMO-LEPS rainfall predictions, but the spread of this posterior ensemble is not able to encompass the observation variability. This fact is related to the construction of the meteorological ensemble, whose spread reaches its maximum after 5 days. In the future, the use of a new ensemble, COSMO-SREPS, focused on the first 3 days, could help to enlarge the meteorological and, consequently, the hydrological variability.
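Read abstractly, the combination described above (an autoregressive prior, a linear likelihood linking past observed discharge to the TOPKAPI predictions, and a filter for the posterior) fits the generic linear state-space template

$$
x_t = \phi\, x_{t-1} + \eta_t, \qquad y_t = H\, x_t + \epsilon_t,
$$

with illustrative symbols rather than the thesis's notation. When the statistical properties of $\eta_t$ and $\epsilon_t$ are unknown, as stated here, an H∞ filter is used in place of the usual Kalman recursion: it bounds the worst-case estimation error instead of minimizing its variance.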
Abstract:
This thesis concerns the analysis of gear transmissions and of gears in general, with a view to minimizing energy losses. A model has been developed for calculating the energy and heat dissipated in a gearbox, both parallel-axis and planetary. This model makes it possible to estimate the equilibrium temperature of the oil under varying operating conditions. Thermal calculation is still not widespread in gearbox design, but it has proven to be important above all for compact gearboxes, such as planetary gearboxes, for which the maximum transmissible power is usually determined precisely by thermal considerations. The model has been implemented in an automated calculation system, which can be adapted to various types of gearbox. This calculation system also makes it possible to estimate the energy dissipated under various lubrication conditions, and it has been used to assess the differences between traditional oil-bath lubrication and "dry sump" or "wet sump" lubrication. The model was applied to the particular case of a two-stage gearbox: the first stage with parallel axes and the second planetary. Within a research contract between DIEM and Brevini S.p.A. of Reggio Emilia, experimental tests were carried out on a prototype of this gearbox, tests which made it possible to calibrate the proposed model [1]. A further field of investigation was the study of the energy dissipated in the meshing of two gears, using models that compute a friction coefficient that varies along the contact segment. The most common models, by contrast, are based on an average friction coefficient, whereas it can be observed that the coefficient varies appreciably during meshing. In particular, since the literature does not report how efficiency varies in the case of profile-shifted gears, attention was focused on the energy dissipated in the gears as the profile shift varies. This study is reported in [2]. Research was also carried out on the operation of screw-nut linear actuators. The mechanisms that determine the wear conditions of the screw-nut coupling in linear actuators were studied, with particular reference to the thermal aspects of the phenomenon. It was found, in fact, that the contact temperature between screw and nut is the most critical parameter in the operation of these actuators. By means of an experimental test, a law was found which, given pressure, speed and duty factor, estimates the operating temperature. An interpretation of this experimental law was given on the basis of known theoretical models. This study was carried out within a research contract between DIEM and Ognibene Meccanica S.r.l. of Bologna and is published in [3].
Abstract:
The present dissertation focuses on burnout and work engagement among teachers, with a special focus on the Job Demands-Resources (JD-R) model. Chapter 1 focuses on teacher burnout. It aims to investigate the role of efficacy beliefs using negatively worded inefficacy items instead of positive ones, and to establish whether depersonalization and cynicism can be considered two different dimensions of the teacher burnout syndrome. Chapter 2 investigates the factorial validity of the instruments used to measure work engagement (i.e. the Utrecht Work Engagement Scale, UWES-17 and UWES-9). Moreover, because the current study is partly longitudinal in nature, the stability of engagement across time can also be investigated. Finally, based on cluster analyses, two groups that differ in levels of engagement are compared with respect to their job and personal resources (i.e. possibilities for personal development, work-life balance, and self-efficacy), positive organizational attitudes and behaviours (i.e. job satisfaction and organizational citizenship behaviour) and perceived health. Chapter 3 tests the JD-R model in a longitudinal way, by also integrating the role of personal resources (i.e. self-efficacy). This chapter seeks to identify which job demands and which job and personal resources contribute most to discriminating burned-out teachers from non-burned-out teachers, as well as engaged teachers from non-engaged teachers. Chapter 4 uses a diary study to extend knowledge about the dynamic nature of the JD-R model by considering between- and within-person variations with regard to both the motivational and the health impairment processes.
Abstract:
The thesis studies the economic and financial conditions of Italian households, using microeconomic data from the Survey on Household Income and Wealth (SHIW) over the period 1998-2006. It develops along two lines of enquiry. First, it studies the determinants of households' holdings of assets and liabilities and estimates their degree of correlation. After a review of the literature, it estimates two non-linear multivariate models of the interactions between assets and liabilities with repeated cross-sections. Second, it analyses households' financial difficulties. It defines a quantitative measure of financial distress and tests, by means of non-linear dynamic probit models, whether the probability of experiencing financial difficulties is persistent over time. Chapter 1 provides a critical review of the theoretical and empirical literature on the estimation of asset and liability holdings, on their interactions and on households' net wealth. The review stresses the fact that a large part of the literature explains households' debt holdings as a function, among other things, of net wealth, an assumption that runs into possible endogeneity problems. Chapter 2 defines two non-linear multivariate models to study the interactions between assets and liabilities held by Italian households. Estimation refers to pooled cross-sections of the SHIW. The first model is a bivariate tobit that estimates the factors affecting assets and liabilities and their degree of correlation, with results consistent with theoretical expectations. To tackle the presence of non-normality and heteroskedasticity in the error term, which generate inconsistent tobit estimators, semi-parametric estimates are provided that confirm the results of the tobit model. The second model is a quadrivariate probit on three different assets (safe, risky and real) and total liabilities; the results show the expected patterns of interdependence suggested by theoretical considerations. Chapter 3 reviews the methodologies for estimating non-linear dynamic panel data models, drawing attention to the problems that must be dealt with to obtain consistent estimators. Specific attention is given to the initial conditions problem raised by the inclusion of the lagged dependent variable in the set of explanatory variables. The advantage of using dynamic panel data models lies in the fact that they make it possible to account simultaneously for true state dependence, via the lagged variable, and for unobserved heterogeneity, via the specification of individual effects. Chapter 4 applies the models reviewed in Chapter 3 to analyse the financial difficulties of Italian households, using information on net wealth as provided in the panel component of the SHIW. The aim is to test whether households persistently experience financial difficulties over time. A thorough discussion is provided of the alternative approaches proposed in the literature (subjective/qualitative indicators versus quantitative indexes) to identify households in financial distress. Households in financial difficulties are identified as those holding amounts of net wealth lower than the value corresponding to the first quartile of the net wealth distribution. Estimation is conducted via four different methods: the pooled probit model, the random effects probit model with exogenous initial conditions, the Heckman model and the recently developed Wooldridge model. Results obtained from all estimators support the hypothesis of true state dependence and show that, in line with the literature, less sophisticated models, namely the pooled and exogenous-initial-conditions models, over-estimate such persistence.
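As a reference point, the dynamic random-effects probit being compared across these estimators can be written generically (illustrative notation) as

$$
\Pr\left(y_{it}=1 \mid y_{i,t-1}, \mathbf{x}_{it}, c_i\right) = \Phi\!\left(\rho\, y_{i,t-1} + \mathbf{x}_{it}'\boldsymbol{\beta} + c_i\right),
$$

where $\rho$ captures true state dependence and $c_i$ unobserved heterogeneity; the Heckman and Wooldridge approaches differ in how the distribution of $c_i$ is specified relative to the initial observation $y_{i0}$, which is the initial conditions problem discussed in Chapter 3.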
Abstract:
This thesis analyses micro and macro aspects of applied fiscal policy issues. The first chapter investigates the extent to which the composition of local budget spending reacts to variations in fiscal rules. I consider the budgets of Italian municipalities and exploit specific changes in the Domestic Stability Pact's rules to perform a difference-in-discontinuities analysis. The results show that imposing a single cap on the total amount of consumption and investment is not as binding as two caps, one for consumption and a different one for investment. More specifically, consumption is driven by changes in wage and service spending, while investment relies on infrastructure movements. In addition, there is evidence that when an increase in investment is achieved, there is also a higher budget deficit. The second chapter analyses the extent to which fiscal policy shocks are able to affect macroeconomic variables over business cycle fluctuations, differentiating among three intervention channels: public taxation, consumption and investment. The econometric methodology implemented is a Panel Vector Autoregressive model with a structural characterization. The results show that fiscal shocks have different multipliers in expansion and contraction periods: output does not react during good times, while there are significant effects in bad ones. The third chapter evaluates the effects of fiscal policy announcements by the Italian government on the long-term sovereign bond spread of Italy relative to Germany. After collecting data on relevant fiscal policy announcements, we perform a comparative econometric analysis of the three cabinets that followed one another during the period 2009-2013. The results suggest that only the fiscal policy announcements made by members of Monti's cabinet were effective in significantly influencing the Italian spread in the expected direction, revealing a remarkable credibility gap between Berlusconi's and Letta's governments and Monti's administration.
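For orientation, a reduced-form panel VAR of the kind estimated in the second chapter can be sketched (generic notation) as

$$
\mathbf{Y}_{it} = \mathbf{A}_1 \mathbf{Y}_{i,t-1} + \dots + \mathbf{A}_p \mathbf{Y}_{i,t-p} + \mathbf{u}_{it},
\qquad \mathbf{u}_{it} = \mathbf{B}\,\boldsymbol{\varepsilon}_{it},
$$

where $\mathbf{Y}_{it}$ stacks output and the three fiscal variables (taxation, public consumption, public investment), and the structural characterization amounts to restrictions on $\mathbf{B}$ that identify the fiscal shocks $\boldsymbol{\varepsilon}_{it}$.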
Abstract:
Today's pet food industry is growing rapidly, with pet owners demanding high-quality diets for their pets. The primary role of the diet is to provide enough nutrients to meet metabolic requirements, while giving the consumer a feeling of well-being. Diet nutrient composition and digestibility are of crucial importance for the health and well-being of animals. A recent strategy to improve the quality of food is the use of "nutraceuticals" or "functional foods". At the moment, probiotics and prebiotics are among the most studied and frequently used functional food compounds in pet foods. The present thesis reports results from three different studies. The first study aimed to develop a simple laboratory method to predict pet-food digestibility. The developed method was based on the two-step multi-enzymatic incubation assay described by Vervaeke et al. (1989), with some modifications in order to better represent the digestive physiology of dogs. A trial was then conducted to compare the in vivo digestibility of pet foods with the in vitro digestibility obtained using the newly developed method. Correlation coefficients showed a close correlation between the digestibility data for total dry matter and crude protein obtained with the in vivo and in vitro methods (0.9976 and 0.9957, respectively). Ether extract presented a lower correlation coefficient, although close to 1 (0.9098). Based on the present results, the new method could be considered an alternative system for evaluating dog food digestibility, reducing the need for experimental animals in digestibility trials. The second part of the study aimed to isolate from dog faeces a Lactobacillus strain capable of exerting a probiotic effect on the dog intestinal microflora. A L. animalis strain was isolated from the faeces of 17 adult healthy dogs. The isolated strain was first studied in vitro by adding it to a canine faecal inoculum (at a final concentration of 6 Log CFU/mL) incubated in anaerobic serum bottles and syringes which simulated the large intestine of dogs. Samples of fermentation fluid were collected at 0, 4, 8, and 24 hours for analysis (ammonia, SCFA, pH, lactobacilli, enterococci, coliforms, clostridia). Subsequently, the L. animalis strain was fed to nine dogs having lactobacilli counts lower than 4.5 Log CFU per g of faeces. The study indicated that the L. animalis strain was able to survive gastrointestinal passage and transitorily colonize the dog intestine. Both the in vitro and in vivo results showed that the L. animalis strain positively influenced the composition and metabolism of the intestinal microflora of dogs. The third trial investigated in vitro the effects of several non-digestible oligosaccharides (NDO) on the composition and metabolism of the dog intestinal microflora. Substrates were fermented using a canine faecal inoculum incubated in anaerobic serum bottles and syringes. Substrates were added at a final concentration of 1 g/L (inulin, FOS, pectin, lactitol, gluconic acid) or 4 g/L (chicory). Samples of fermentation fluid were collected at 0, 6, and 24 hours for analysis (ammonia, SCFA, pH, lactobacilli, enterococci, coliforms). Gas production was measured throughout the 24 h of the study. Among the tested NDO, lactitol showed the best prebiotic properties: it reduced coliform and increased lactobacilli counts, enhanced microbial fermentation and promoted the production of SCFA while decreasing BCFA.
All the substrates that were investigated showed one or more positive effects on dog faecal microflora metabolism or composition. Further studies (in particular in vivo studies with dogs) will be needed to confirm the prebiotic properties of lactitol and evaluate its optimal level of inclusion in the diet.
Abstract:
Tomato is one of the main crops of the Italian agri-food sector and a basic ingredient of the national culinary tradition. Tomato processed by the canning industry can be transformed into different product categories, which differ according to the processing techniques employed and the characteristics of the finished product. The share of total food expenditure devoted to food consumed away from home is increasing globally, and the food industry's interest in this sales channel is therefore growing. While the literature contains numerous studies on the purchasing processes of final consumers, there is no evidence of similar studies conducted on Food Service operators. The main objective of the research is to evaluate the preferences of purchasing managers in the Food Service sector for different types of processed tomato, in relation to a range of relevant product attributes and customer characteristics. Data were collected through a hypothetical choice experiment carried out in Italy and in some foreign markets. The results of the survey show that peeled tomatoes (Pelati) are the category of processed tomato preferred by the Food Service purchasing managers interviewed, with 35% of the stated preferences across the proposed choice contexts, followed by pulp (Polpa, 25%), passata (20%) and concentrate (Concentrato, 15%). The estimates of the random-parameters Logit econometric model show that certain credence quality attributes, often used in differentiation and positioning strategies by the food industry in the Retail market, can also play an important role in influencing the preferences of Food Service operators. This could therefore be an interesting line of research to develop in the future, possibly with the joint use of analysis methods based on hypothetical and non-hypothetical choice experiments.
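For reference, the choice probability in a random-parameters (mixed) Logit, the class of model estimated here, has the standard form (generic notation)

$$
P_{nj} = \int \frac{\exp\left(\mathbf{x}_{nj}'\boldsymbol{\beta}\right)}{\sum_{k} \exp\left(\mathbf{x}_{nk}'\boldsymbol{\beta}\right)} \, f(\boldsymbol{\beta} \mid \boldsymbol{\theta}) \, d\boldsymbol{\beta},
$$

where $\mathbf{x}_{nj}$ collects the attributes of alternative $j$ shown to respondent $n$ in the choice experiment and $f$ is the assumed distribution of the taste parameters, which allows preference heterogeneity across purchasing managers.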
Abstract:
In the present work we perform an econometric analysis of the Tribal art market. To this aim, we use a unique and original database that includes information on Tribal art market auctions worldwide from 1998 to 2011. In the literature, art prices are modelled through the hedonic regression model, a classic fixed-effects model. The main drawback of the hedonic approach is the large number of parameters, since, in general, art data include many categorical variables. In this work, we propose a multilevel model for the analysis of Tribal art prices that takes into account the influence of time on artwork prices. In fact, it is natural to assume that time exerts an influence on the price dynamics in various ways. Nevertheless, since the set of objects changes at every auction date, we do not have repeated measurements of the same items over time. Hence, the dataset does not constitute a proper panel; rather, it has a two-level structure in which items, the level-1 units, are grouped into time points, the level-2 units. The main theoretical contribution is the extension of classical multilevel models to cope with the case described above. In particular, we introduce a model with time-dependent random effects at the second level. We propose a novel specification of the model, derive the maximum likelihood estimators and implement them through the E-M algorithm. We test the finite-sample properties of the estimators and the validity of the purpose-written R code by means of a simulation study. Finally, we show that the new model considerably improves the fit to the Tribal art data with respect to both the hedonic regression model and the classic multilevel model.
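A minimal sketch of the structure described, with illustrative notation and a first-order autoregression chosen purely as an example of time-dependent level-2 effects, is

$$
\log p_{ij} = \mathbf{x}_{ij}'\boldsymbol{\beta} + u_j + \varepsilon_{ij},
\qquad u_j = \rho\, u_{j-1} + \delta_j,
$$

where $i$ indexes artworks (level-1 units) sold at auction date $j$ (level-2 units); letting the date effects $u_j$ be serially dependent, rather than i.i.d., is what distinguishes such a specification from a classic multilevel model.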
Abstract:
The objective of this work is to characterize the genome of chromosome 1 of A. thaliana, a small flowering plant used as a model organism in studies of biology and genetics, on the basis of a recent mathematical model of the genetic code. I analyze and compare different portions of the genome: genes, exons, coding sequences (CDS), introns, long introns, intergenes, untranslated regions (UTR) and regulatory sequences. In order to accomplish this task, I transformed the nucleotide sequences into binary sequences based on the definition of three different dichotomic classes. The descriptive analysis of the binary strings indicates the presence of regularities in each portion of the genome considered. In particular, there are remarkable differences between coding sequences (CDS and exons) and non-coding sequences, suggesting that the reading frame is important only for coding sequences and that dichotomic classes can be useful to recognize them. Then, I assessed the existence of short-range dependence between binary sequences computed on the basis of the different dichotomic classes. I used three different measures of dependence: the well-known chi-squared test and two indices derived from the concept of entropy, namely Mutual Information (MI) and Sρ, a normalized version of the Bhattacharyya-Hellinger-Matusita distance. The results show that there is a significant short-range dependence structure only for the coding sequences, whose existence is a clue to an underlying error-detection and correction mechanism. No doubt, further studies are needed in order to assess how the information carried by the dichotomic classes could discriminate between coding and non-coding sequences and, therefore, contribute to unveiling the role of this mathematical structure in error-detection and correction mechanisms. Still, I have shown the potential of the presented approach for understanding the management of genetic information.
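A minimal sketch of one of the dependence measures mentioned, the plug-in estimate of the mutual information between two binary sequences; the function and variable names are illustrative, not taken from the thesis:

```python
import numpy as np

def mutual_information(x, y):
    """Plug-in estimate of the mutual information (in bits) between two binary sequences."""
    x, y = np.asarray(x), np.asarray(y)
    mi = 0.0
    for a in (0, 1):
        for b in (0, 1):
            p_xy = np.mean((x == a) & (y == b))          # joint frequency
            p_x, p_y = np.mean(x == a), np.mean(y == b)  # marginal frequencies
            if p_xy > 0:
                mi += p_xy * np.log2(p_xy / (p_x * p_y))
    return mi

# Example: compare a binary sequence with its one-step shift,
# a crude check for short-range dependence (MI near 0 for an i.i.d. sequence).
rng = np.random.default_rng(0)
s = rng.integers(0, 2, size=1000)
print(mutual_information(s[:-1], s[1:]))
```

MI is zero when the two sequences are independent and grows with the strength of their dependence, which is why it can complement the chi-squared test for detecting short-range structure.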