868 results for Bayesian hierarchical models


Relevance: 90.00%

Abstract:

Many studies have shown relationships between air pollution and the rate of hospital admissions for asthma. A few studies have controlled for age-specific effects by adding separate smoothing functions for each age group. However, whether air pollution effects differ significantly between age groups has not yet been reported. This gap motivates the present study, which tests the hypothesis that the effects of air pollution on asthma hospital admissions differ significantly by age group. Each air pollutant's effect on asthma hospital admissions was estimated separately by age group. Daily time-series data on hospital admission rates from seven cities in Korea from June 1999 through 2003 were analyzed. The outcome variable, the daily hospital admission rate for asthma, was related to five air pollutants used as independent variables: particulate matter <10 micrometers (μm) in aerodynamic diameter (PM10), carbon monoxide (CO), ozone (O3), nitrogen dioxide (NO2), and sulfur dioxide (SO2). Meteorological variables were treated as confounders. Admissions were divided into three age groups: children (<15 years of age), adults (ages 15-64), and the elderly (≥65 years of age). The adult age group served as the reference group for each city. To estimate age-specific air pollution effects, the analysis proceeded in two stages. In the first stage, Generalized Additive Models (GAMs) with cubic smoothing splines were applied to estimate the age-city-specific air pollution effects on asthma hospital admission rates by city and age group. In the second stage, a Bayesian hierarchical model with a non-informative (large-variance) prior was used to combine city-specific effects by age group. The hypothesis test showed that the effects of PM10, CO and NO2 differed significantly by age group. With the adult effect fixed at zero as the reference, age-specific air pollution effects were: -0.00154 (95% confidence interval (CI) = -0.0030, -0.0001) for children and 0.00126 (95% CI = 0.0006, 0.0019) for the elderly for PM10; -0.0195 (95% CI = -0.0386, -0.0004) for children for CO; and 0.00494 (95% CI = 0.0028, 0.0071) for the elderly for NO2. Relative rates (RRs) were 1.008 (95% CI = 1.000, 1.017) in adults and 1.021 (95% CI = 1.012, 1.030) in the elderly per 10 μg/m3 increase in PM10; 1.019 (95% CI = 1.005, 1.033) in adults and 1.022 (95% CI = 1.012, 1.033) in the elderly per 0.1 part per million (ppm) increase in CO; and, in the elderly, 1.006 (95% CI = 1.002, 1.009) per 1 part per billion (ppb) increase in NO2 and 1.019 (95% CI = 1.007, 1.032) per 1 ppb increase in SO2. Asthma hospital admissions increased significantly with PM10 and CO in adults, and with PM10, CO, NO2 and SO2 in the elderly.
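
The second-stage pooling can be pictured with a minimal sketch (all numbers hypothetical, not the study's estimates): city-specific first-stage effect estimates beta_c with standard errors se_c are combined under a normal hierarchical model with a vague prior, integrating the pooled mean out analytically and placing the between-city standard deviation tau on a grid.

    import numpy as np

    # Hypothetical city-specific effect estimates (log relative rates per
    # unit pollutant) and their standard errors from first-stage GAMs.
    beta = np.array([0.0012, 0.0009, 0.0018, 0.0007, 0.0015, 0.0011, 0.0014])
    se = np.array([0.0004, 0.0006, 0.0007, 0.0005, 0.0006, 0.0008, 0.0005])

    # Model: beta_c ~ N(mu, se_c^2 + tau^2), flat priors on mu and tau.
    taus = np.linspace(1e-6, 0.005, 400)
    log_post = np.empty_like(taus)
    mu_given_tau = np.empty_like(taus)
    for i, tau in enumerate(taus):
        w = 1.0 / (se**2 + tau**2)                # precision weights
        mu = np.sum(w * beta) / np.sum(w)         # conditional posterior mean
        # log marginal likelihood of tau with mu integrated out
        log_post[i] = (0.5 * np.log(2 * np.pi / np.sum(w))
                       + np.sum(-0.5 * np.log(2 * np.pi / w)
                                - 0.5 * w * (beta - mu)**2))
        mu_given_tau[i] = mu

    post = np.exp(log_post - log_post.max())
    post /= post.sum()
    print("pooled effect:", np.sum(post * mu_given_tau))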

Relevance: 90.00%

Abstract:

In recent years, VAR models have become the main econometric tool for testing whether a relationship between variables may exist and for assessing the effects of economic policies. This thesis studies three different identification approaches starting from reduced-form VAR models (including the sample period, the set of endogenous variables, and deterministic terms). In the VAR case we use the Granger causality test to verify the ability of one variable to forecast another; in the case of cointegration we use VECM models to jointly estimate long-run and short-run coefficients; and in the case of small data sets and overfitting problems we use Bayesian VAR models, with impulse response functions and variance decompositions, to analyze the effect of shocks on macroeconomic variables. To this end, the empirical studies are carried out using specific time-series data and formulating different hypotheses. Three VAR models were used: first, to study monetary policy decisions and discriminate among the various post-Keynesian theories of monetary policy, in particular the so-called "solvency rule" (Brancaccio and Fontana 2013, 2015) and the nominal GDP rule in the Euro Area (paper 1); second, to extend the evidence on the money endogeneity hypothesis by assessing the effects of bank securitization on the monetary policy transmission mechanism in the United States (paper 2); third, to assess the effects of aging on health expenditure in Italy in terms of economic policy implications (paper 3). The thesis is introduced by Chapter 1, which outlines the context, motivation and purpose of this research, while the structure and summary, as well as the main results, are described in the remaining chapters. Chapter 2 examines, using a first-difference VAR model with quarterly Euro-area data, whether monetary policy decisions can be interpreted in terms of a "monetary policy rule", with specific reference to the so-called "nominal GDP targeting rule" (McCallum 1988; Hall and Mankiw 1994; Woodford 2012). The results show a causal relationship running from the gap between the growth rates of nominal GDP and target GDP to changes in three-month market interest rates. The same analysis does not seem to confirm the existence of a significant inverse causal relationship from changes in the market interest rate to the gap between the growth rates of nominal GDP and target GDP. Similar results were obtained by replacing the market interest rate with the ECB refinancing rate. This confirmation of only one of the two directions of causality does not support an interpretation of monetary policy based on the nominal GDP targeting rule, and raises doubts in more general terms about the applicability of the Taylor rule and all conventional monetary policy rules to the case at hand. The results instead appear more in line with other possible approaches, such as those based on certain post-Keynesian and Marxist analyses of monetary theory, and more specifically the so-called "solvency rule" (Brancaccio and Fontana 2013, 2015).
These lines of research challenge the simplistic thesis that the scope of monetary policy is the stabilization of inflation, real GDP or nominal income around a "natural" equilibrium level. Rather, they suggest that central banks actually pursue a more complex goal, namely the regulation of the financial system, with particular reference to the relationships between creditors and debtors and the relative solvency of economic units. Chapter 3 analyzes loan supply, considering the endogeneity of money arising from banks' securitization activity over the period 1999-2012. Although much of the literature investigates the endogeneity of the money supply, this approach has rarely been adopted to investigate short- and long-run money endogeneity with a study of the United States during the two major crises: the bursting of the dot-com bubble (1998-1999) and the sub-prime mortgage crisis (2008-2009). In particular, we consider the effects of financial innovation on the lending channel, using the securitization-adjusted loan series to test whether the U.S. banking system is pushed to seek cheaper sources of funding, such as securitization, under restrictive monetary policy (Altunbas et al., 2009). The analysis is based on the monetary aggregates M1 and M2. Using VECM models, we examine a long-run relationship among the variables in levels and assess the effects of the money supply by analyzing how much monetary policy affects short-run deviations from the long-run relationship. The results show that securitization influences the impact of loans on M1 and M2. This implies that the money supply is endogenous, confirming the structuralist approach and highlighting that economic agents are motivated to increase securitization as an advance hedge against monetary policy shocks. Chapter 4 investigates the relationship between per capita health expenditure, per capita GDP, the old-age index and life expectancy in Italy over the period 1990-2013, using Bayesian VAR models and annual data drawn from the OECD and Eurostat databases. Impulse response functions and variance decompositions show a positive relationship: from per capita GDP to per capita health expenditure, from life expectancy to health expenditure, and from the aging index to per capita health expenditure. The impact of aging on health expenditure is more significant than that of the other variables. Overall, our results suggest that disabilities closely related to aging may be the main driver of health expenditure in the short-to-medium run. Good health care management helps improve patient well-being without increasing total health expenditure. However, policies that improve the health status of older people may be needed to lower per capita demand for health and social services.
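
The Granger-causality step of Chapter 2 can be illustrated with a minimal sketch on synthetic data (the variable names are placeholders, not the thesis data), using statsmodels' grangercausalitytests:

    import numpy as np
    from statsmodels.tsa.stattools import grangercausalitytests

    rng = np.random.default_rng(0)
    T = 200
    gap = rng.normal(size=T)          # stand-in for the nominal-GDP gap
    rate = np.zeros(T)                # stand-in for 3-month rate changes
    for t in range(1, T):
        # the gap leads the rate by one quarter in this toy data
        rate[t] = 0.5 * gap[t - 1] + rng.normal(scale=0.5)

    # Column order matters: the test asks whether the SECOND column
    # Granger-causes the first; here, whether gap helps predict rate.
    grangercausalitytests(np.column_stack([rate, gap]), maxlag=4)

Swapping the column order tests the reverse direction, mirroring the asymmetry the chapter reports.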

Relevance: 90.00%

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-06

Relevance: 90.00%

Abstract:

Multi-level hierarchical models of representative democracy, in which, for instance, voters elect county representatives, county representatives elect district representatives, district representatives elect state representatives, and state representatives elect a president, reduce the number of electors each representative is answerable to; considering each level separately, these models could therefore come closer to direct democracy. In this paper we show that worst-case policy bias increases with the number of hierarchical levels. This also means that the opportunities of a gerrymanderer increase with the number of hierarchical levels.
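
The worst-case claim can be illustrated with a toy calculation under the common idealization of equal-size units and simple majority rule at every tier (an assumption of this sketch, not necessarily of the paper): the fraction of voters needed to prevail shrinks roughly like (1/2)^L with L levels.

    from math import ceil

    def min_winning_share(m, levels):
        # Majority rule at each tier: a majority of units is needed, and
        # within each unit won, only a majority of its sub-units, and so on.
        per_level = ceil((m + 1) / 2) / m
        return per_level ** levels

    for L in range(1, 6):
        print(L, f"{min_winning_share(m=101, levels=L):.4f}")
    # prints roughly 0.50, 0.25, 0.13, 0.07, 0.03 as levels increase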

Relevance: 90.00%

Abstract:

Stable isotope analysis has emerged as one of the primary means for examining the structure and dynamics of food webs, and numerous analytical approaches are now commonly used in the field. Techniques range from simple, qualitative inferences based on the isotopic niche, to Bayesian mixing models that can be used to characterize food-web structure at multiple hierarchical levels. We provide a comprehensive review of these techniques, and thus a single reference source to help identify the most useful approaches to apply to a given data set. We structure the review around four general questions: (1) what is the trophic position of an organism in a food web?; (2) which resource pools support consumers?; (3) what additional information does relative position of consumers in isotopic space reveal about food-web structure?; and (4) what is the degree of trophic variability at the intrapopulation level? For each general question, we detail different approaches that have been applied, discussing the strengths and weaknesses of each. We conclude with a set of suggestions that transcend individual analytical approaches, and provide guidance for future applications in the field.
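
At the simplest end of this toolbox sits the two-source, single-tracer linear mixing model; a minimal sketch with hypothetical values (the trophic discrimination factor tdf is an illustrative assumption):

    def two_source_mixing(d_consumer, d_a, d_b, tdf=0.0):
        # Proportion of source A in the diet from one tracer (e.g., d13C);
        # tdf is subtracted from the consumer value before mixing.
        d_mix = d_consumer - tdf
        return (d_mix - d_b) / (d_a - d_b)

    # e.g., consumer -21.0 per mil, sources -19.0 and -26.0, tdf 1.0
    print(two_source_mixing(-21.0, -19.0, -26.0, tdf=1.0))   # ~0.57

Bayesian mixing models generalize exactly this balance to many sources and tracers, with priors and error terms.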

Relevance: 90.00%

Abstract:

This research explores Bayesian updating as a tool for estimating parameters probabilistically by dynamic analysis of data sequences. Two distinct Bayesian updating methodologies are assessed. The first approach focuses on Bayesian updating of failure rates for primary events in fault trees. A Poisson Exponentially Weighted Moving Average (PEWMA) model is implemented to carry out Bayesian updating of failure rates for individual primary events in the fault tree. To provide a basis for testing the PEWMA model, a fault tree is developed based on the Texas City Refinery incident of 2005. A qualitative fault tree analysis is then carried out to obtain a logical expression for the top event. A dynamic fault tree analysis is carried out by evaluating the top event probability at each Bayesian updating step via Monte Carlo sampling from the posterior failure rate distributions. It is demonstrated that PEWMA modeling is advantageous over conventional conjugate Poisson-Gamma updating techniques when failure data are collected over long time spans. The second approach focuses on Bayesian updating of parameters in non-linear forward models; specifically, the technique is applied to the hydrocarbon material balance equation. To test the accuracy of the implemented Bayesian updating models, a synthetic data set is developed using the Eclipse reservoir simulator. Both structured-grid and MCMC-sampling-based solution techniques are implemented and are shown to model the synthetic data set with good accuracy. Furthermore, a graphical analysis shows that the implemented MCMC model displays good convergence properties. A case study demonstrates that the likelihood variance affects the rate at which the posterior assimilates information from the measured data sequence. Error in the measured data significantly affects the accuracy of the posterior parameter distributions. Increasing the likelihood variance mitigates random measurement errors, but causes the overall variance of the posterior to increase. Bayesian updating is shown to be advantageous over deterministic regression techniques, as it allows the incorporation of prior belief and full modeling of uncertainty over the parameter ranges. As such, the Bayesian approach to estimating parameters in the material balance equation shows utility for incorporation into reservoir engineering workflows.
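
For contrast with the PEWMA approach, the conventional conjugate Poisson-Gamma update mentioned above is a one-liner; the prior and data values below are hypothetical:

    # Gamma(alpha, beta) prior on a Poisson failure rate lambda (events per
    # unit time). Observing k failures over exposure time t gives the
    # posterior Gamma(alpha + k, beta + t).
    from scipy import stats

    alpha, beta = 1.0, 2.0        # hypothetical prior: mean 0.5 failures/yr
    k, t = 3, 10.0                # hypothetical data: 3 failures in 10 years
    post = stats.gamma(a=alpha + k, scale=1.0 / (beta + t))
    print(post.mean(), post.interval(0.95))

PEWMA departs from this by discounting older observations, which is why it fares better when data accumulate over long time spans.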

Relevance: 90.00%

Abstract:

This dissertation contributes to the rapidly growing body of empirical research in operations management. It contains two essays, tackling two different sets of operations management questions motivated by and built on field data sets from two very different industries: air cargo logistics and retailing.

The first essay, based on a data set obtained from a world-leading third-party logistics company, develops a novel and general Bayesian hierarchical learning framework for estimating customers' spillover learning, that is, customers' learning about the quality of a service (or product) from their previous experiences with similar yet not identical services. We then apply the model to the data set to study how customers' experiences with shipping on a particular route affect their future decisions about shipping not only on that route, but also on other routes serviced by the same logistics company. We find that customers indeed borrow experiences from similar but different services to update the quality beliefs that determine their future purchase decisions, and that these service quality beliefs have a significant impact on those decisions. Moreover, customers are risk averse: they are averse not only to experience variability but also to belief uncertainty (i.e., customers' uncertainty about their own beliefs). Finally, belief uncertainty affects customers' utilities more than experience variability does.
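
A minimal sketch in the spirit of spillover learning (an illustration, not the essay's actual model): if beliefs about the quality of two similar routes are jointly Gaussian with positive correlation, one noisy experience on the first route also shifts the belief about the second.

    import numpy as np

    # Hypothetical joint Gaussian prior over the quality of two similar routes.
    mu = np.array([0.0, 0.0])
    rho, s2 = 0.6, 1.0
    Sigma = s2 * np.array([[1.0, rho], [rho, 1.0]])

    # One noisy experience x on route 0 with observation noise v.
    x, v = 1.2, 0.5
    H = np.array([[1.0, 0.0]])                 # observe route 0 only
    S = H @ Sigma @ H.T + v                    # innovation variance
    K = Sigma @ H.T / S                        # Kalman-style gain
    mu_post = mu + (K * (x - H @ mu)).ravel()
    Sigma_post = Sigma - K @ H @ Sigma
    print(mu_post)   # the belief about route 1 also moves: spillover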

The second essay is based on a data set obtained from a large Chinese supermarket chain, containing sales as well as both wholesale and retail prices of unpackaged perishable vegetables. Recognizing the special characteristics of this particular product category, we develop a structural estimation model in a discrete-continuous choice framework. Building on this framework, we then study an optimization model for joint pricing and inventory management of multiple products, which aims to improve the company's profit from direct sales while reducing food waste, thereby improving social welfare.
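
A toy single-product version of the joint pricing-and-stocking trade-off for a perishable item (the demand model and all numbers are illustrative assumptions, not the essay's estimated model):

    import numpy as np
    from scipy import stats

    c = 1.0                                    # hypothetical wholesale cost

    def expected_profit(p, q, a=4.0, b=0.6):
        # Daily demand Poisson with a log-linear price response exp(a - b*p);
        # unsold stock of the perishable item is wasted.
        lam = np.exp(a - b * p)
        d = np.arange(0, 200)
        pmf = stats.poisson.pmf(d, lam)
        sales = np.sum(pmf * np.minimum(d, q))
        return p * sales - c * q

    best = max(((p, q) for p in np.arange(1.5, 5.1, 0.1) for q in range(60)),
               key=lambda pq: expected_profit(*pq))
    print(best, expected_profit(*best))

Less waste here simply means choosing q closer to what will actually sell at the chosen price, which is the same lever the essay's multi-product model optimizes.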

Collectively, the studies in this dissertation provide useful modeling ideas, decision tools, insights, and guidance for firms to utilize vast sales and operations data to devise more effective business strategies.

Relevance: 90.00%

Abstract:

Bayesian nonparametric models, such as the Gaussian process and the Dirichlet process, have been extensively applied to target kinematics modeling in various applications, including environmental monitoring, traffic planning, endangered species tracking, dynamic scene analysis, autonomous robot navigation, and human motion modeling. As these successful applications show, Bayesian nonparametric models are able to adjust their complexity adaptively from data as necessary, and are resistant to overfitting and underfitting. However, most existing works assume that the sensor measurements used to learn the Bayesian nonparametric target kinematics models are obtained a priori, or that the target kinematics can be measured by the sensor at any given time throughout the task. Little work has been done on controlling a sensor with a bounded field of view to obtain the measurements of mobile targets that are most informative for reducing the uncertainty of the Bayesian nonparametric models. To present a systematic sensor planning approach to learning Bayesian nonparametric models, the Gaussian process target kinematics model is introduced first; it is capable of describing time-invariant spatial phenomena such as ocean currents, temperature distributions and wind velocity fields. The Dirichlet process-Gaussian process target kinematics model is subsequently discussed for modeling mixtures of mobile targets, such as pedestrian motion patterns.
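
A minimal sketch of the first building block, Gaussian process regression on a hypothetical one-dimensional time-invariant field (using scikit-learn for brevity; not the dissertation's implementation):

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(0)
    X = rng.uniform(0, 10, size=(30, 1))       # sample locations
    y = np.sin(X[:, 0]) + rng.normal(scale=0.2, size=30)   # noisy field values

    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0)
                                  + WhiteKernel(0.04))
    gp.fit(X, y)
    Xs = np.linspace(0, 10, 5).reshape(-1, 1)
    mean, std = gp.predict(Xs, return_std=True)  # posterior mean, uncertainty
    print(np.round(mean, 2), np.round(std, 2))

The predictive standard deviation is exactly the uncertainty that informative sensor placement aims to shrink.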

Novel information-theoretic functions are developed for these Bayesian nonparametric target kinematics models to represent the expected utility of measurements as a function of sensor control inputs and random environmental variables. A Gaussian process expected Kullback-Leibler (KL) divergence is developed as the expectation, with respect to the future measurements, of the KL divergence between the current (prior) and posterior Gaussian process target kinematics models. This approach is then extended to a new information value function for target kinematics described by a Dirichlet process-Gaussian process mixture model. A theorem shows that the novel information-theoretic functions are bounded; based on this theorem, efficient estimators of the new information-theoretic functions are designed and proved to be unbiased, with the variance of the resulting approximation error decreasing linearly as the number of samples increases. Optimizing the novel information-theoretic functions under sensor dynamics constraints is proved to be NP-hard, and a cumulative lower bound is then proposed to reduce the computational complexity to polynomial time.
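
For finite-dimensional marginals, the KL divergence between two Gaussians has a closed form that serves as the basic ingredient of such expected-KL criteria; a minimal sketch (inputs hypothetical):

    import numpy as np

    def gaussian_kl(mu0, S0, mu1, S1):
        # KL( N(mu0, S0) || N(mu1, S1) ) for k-dimensional Gaussians
        k = len(mu0)
        S1_inv = np.linalg.inv(S1)
        d = mu1 - mu0
        return 0.5 * (np.trace(S1_inv @ S0) + d @ S1_inv @ d - k
                      + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

    # e.g., prior vs. posterior GP marginals at two test points
    print(gaussian_kl(np.zeros(2), np.eye(2),
                      np.array([0.5, 0.0]), 0.5 * np.eye(2)))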

Three sensor planning algorithms are developed according to the assumptions on the target kinematics and the sensor dynamics. For problems where the control space of the sensor is discrete, a greedy algorithm is proposed; its efficiency is demonstrated by a numerical experiment with ocean-current data obtained from moored buoys. A sweep line algorithm is developed for applications where the sensor control space is continuous and unconstrained; synthetic simulations as well as physical experiments with ground robots and a surveillance camera are conducted to evaluate its performance. Moreover, a lexicographic algorithm is designed, based on the cumulative lower bound of the novel information-theoretic functions, for the scenario where the sensor dynamics are constrained; numerical experiments with real data collected from indoor pedestrians by a commercial pan-tilt camera are performed to examine it. Results from both the numerical simulations and the physical experiments show that the three sensor planning algorithms proposed in this dissertation, based on the novel information-theoretic functions, are superior at learning the target kinematics with little or no prior knowledge.
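
The discrete-control greedy strategy can be sketched generically (the gain function below is a diminishing-returns stand-in, not one of the information functions above):

    def greedy_plan(candidates, info_gain, budget):
        # Pick sensor actions one at a time, each maximizing marginal gain;
        # info_gain(chosen, a) returns the gain of adding action a.
        chosen = []
        remaining = list(candidates)
        for _ in range(budget):
            best = max(remaining, key=lambda a: info_gain(chosen, a))
            chosen.append(best)
            remaining.remove(best)
        return chosen

    # Toy usage over scalar "locations" with overlap penalties
    gain = lambda chosen, a: a - 0.5 * sum(1.0 for c in chosen
                                           if abs(c - a) < 2)
    print(greedy_plan(range(10), gain, budget=3))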

Relevance: 90.00%

Abstract:

Temporal replicate counts are often aggregated to improve model fit by reducing zero-inflation and count variability, and, in the case of migration counts collected hourly throughout a migration, aggregation allows one to ignore nonindependence. However, aggregation can represent a loss of potentially useful information on the hourly or seasonal distribution of counts, which might impact our ability to estimate reliable trends. We simulated 20-year hourly raptor migration count datasets with a known rate of change to test the effect of aggregating hourly counts to daily or annual totals on our ability to recover the known trend. We simulated data for three types of species, to test whether results varied with species abundance or migration strategy: a commonly detected species, e.g., Northern Harrier, Circus cyaneus; a rarely detected species, e.g., Peregrine Falcon, Falco peregrinus; and a species typically counted in large aggregations with overdispersed counts, e.g., Broad-winged Hawk, Buteo platypterus. We compared the accuracy and precision of estimated trends across species and count types (hourly/daily/annual) using hierarchical models that assumed a Poisson, negative binomial (NB) or zero-inflated negative binomial (ZINB) count distribution. We found little benefit of modeling zero-inflation or of modeling the hourly distribution of migration counts. For the rare species, trends analyzed using daily totals and an NB or ZINB data distribution resulted in a higher probability of detecting an accurate and precise trend. In contrast, trends of the common and overdispersed species benefited from aggregation to annual totals; for the overdispersed species in particular, trends estimated using annual totals were more precise and resulted in lower probabilities of estimating a trend (1) in the wrong direction, or (2) with credible intervals that excluded the true trend, compared with hourly and daily counts.
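
The simulation design can be sketched in a stripped-down form (an assumption-laden toy: negative binomial hourly counts around a log-linear annual trend, with the trend recovered from annual totals by a simple log-linear fit rather than the hierarchical models used in the study):

    import numpy as np

    rng = np.random.default_rng(1)
    years, hours = 20, 600            # hypothetical hours counted per season
    true_trend = -0.02                # known log-linear annual rate of change

    # Hourly counts: negative binomial around a declining annual mean
    annual_mean = 50 * np.exp(true_trend * np.arange(years))
    m = annual_mean[:, None] / hours              # per-hour mean
    hourly = rng.negative_binomial(n=0.3, p=0.3 / (0.3 + m),
                                   size=(years, hours))
    annual = hourly.sum(axis=1)

    # Recover the trend from annual totals with a log-linear fit
    slope = np.polyfit(np.arange(years), np.log(annual + 0.5), 1)[0]
    print(f"true {true_trend:.3f}, estimated {slope:.3f}")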

Relevance: 80.00%

Abstract:

The impact of climate change on the health of vulnerable groups such as the elderly has been of increasing concern. However, to date there has been no meta-analysis of the current literature on the effects of temperature fluctuations upon mortality amongst the elderly. We synthesised risk estimates of the overall impact of daily mean temperature on elderly mortality across different continents. A comprehensive literature search was conducted using MEDLINE and PubMed to identify papers published up to December 2010. Selection criteria covering suitable temperature indicators, endpoints, study designs and threshold identification were applied. A two-stage Bayesian hierarchical model was used to summarise the percent increase in mortality per 1°C temperature increase (or decrease) on hot (or cold) days, with 95% confidence intervals; lagged effects were also measured. Fifteen studies met the eligibility criteria, and almost 13 million elderly deaths were included in this meta-analysis. Overall, all-cause mortality increased by 2-5% per 1°C increment during hot temperature intervals, and by 1-2% per 1°C decrease during cold temperature intervals. Lags of up to 9 days in exposure to cold temperature intervals were substantially associated with all-cause mortality, but no substantial lagged effects were observed for hot intervals. Thus, both hot and cold temperatures substantially increased mortality among the elderly, but the magnitude of heat-related effects seemed to be larger than that of cold effects within a global context.
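
Percent changes like those reported here are typically derived from a pooled log-linear coefficient; a minimal sketch with hypothetical numbers:

    import numpy as np

    # Convert a pooled log-linear temperature coefficient (per 1 deg C)
    # into the percent change in mortality, with a normal-approximation
    # 95% interval. beta and se are hypothetical.
    beta, se = 0.03, 0.008
    lo, hi = beta - 1.96 * se, beta + 1.96 * se
    pct = lambda b: 100 * (np.exp(b) - 1)
    print(f"{pct(beta):.1f}% (95% CI {pct(lo):.1f}% to {pct(hi):.1f}%)")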

Relevance: 80.00%

Abstract:

Background: We investigated the geographical variation of water supply and sanitation indicators (WS&S) and their role in the risk of schistosomiasis and hookworm infection in school-age children in West Africa. The aim was to predict large-scale geographical variation in WS&S, quantify the attributable risk of S. haematobium, S. mansoni and hookworm infections due to WS&S, and identify communities where sustainable transmission control could be targeted across the region.

Methods: National cross-sectional household-based demographic health surveys were conducted in 24,542 households in Burkina Faso, Ghana and Mali in 2003-2006. We generated spatially explicit predictions of areas without piped water, toilet facilities and finished floors in West Africa, adjusting for household covariates. Using recently published helminth prevalence data, we developed Bayesian model-based geostatistical (MBG) models of S. haematobium, S. mansoni and hookworm infection in West Africa, including environmental covariates and the mapped WS&S outputs. Using these models we estimated the effect of WS&S on parasite risk, quantified their attributable fraction of infection, and mapped the risk of infection in West Africa.

Findings: Our maps show that most areas in West Africa are very poorly served by water supply, except in major urban centers. Geographical coverage is better for toilet availability and improved household flooring. We estimated smaller attributable risks for water supply in S. mansoni (47%) than in S. haematobium (71%), and 5% of hookworm cases could be averted by improving sanitation. Greater levels of inadequate sanitation increased the risk of schistosomiasis, and greater levels of unsafe water supply increased the risk of hookworm. The role of floor type for S. haematobium infection (21%) was comparable to that for S. mansoni (16%), but was significantly higher for hookworm infection (86%). S. haematobium and hookworm maps accounting for WS&S show smaller clusters of maximal prevalence in areas bordering Burkina Faso and Mali. The map of S. mansoni shows that this parasite is much more widespread across the north of the Niger River basin than previously predicted.

Interpretation: Our maps identify areas where the Millennium Development Goal for water and sanitation is lagging behind. Our results show that WS&S are important contributors to the burden of major helminth infections of children in West Africa. Including information about WS&S as well as the "traditional" environmental risk factors in spatial models of helminth risk yielded a substantial gain both in model fit and in the proportion of spatial variance in helminth risk explained. Mapping the distribution of infection risk adjusted for WS&S allowed the identification of communities in West Africa where integrated preventive chemotherapy and engineering interventions will yield the greatest public health benefits.
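
Attributable fractions like those quoted above are commonly computed with Levin's formula; a minimal sketch (numbers hypothetical, not the study's estimates):

    def attributable_fraction(p_exposed, rr):
        # Levin's population attributable fraction for exposure prevalence
        # p_exposed and relative risk rr
        return p_exposed * (rr - 1) / (1 + p_exposed * (rr - 1))

    # e.g., hypothetical 60% of households on unsafe water, RR 2.5
    print(attributable_fraction(0.6, 2.5))   # ~0.47 of cases attributable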

Relevance: 80.00%

Abstract:

BACKGROUND: The effect of extreme temperature has become an increasing public health concern. Evaluating the impact of ambient temperature on morbidity has received less attention than its impact on mortality.

METHODS: We performed a systematic literature review and extracted quantitative estimates of the effects of hot temperatures on cardiorespiratory morbidity. There were too few studies on the effects of cold temperatures to warrant a summary. Pooled estimates of the effects of heat were calculated using a Bayesian hierarchical approach that allowed multiple results to be included from the same study, particularly results at different latitudes and with varying lagged effects.

RESULTS: Twenty-one studies were included in the final meta-analysis. The pooled results suggest an increase of 3.2% (95% posterior interval = -3.2% to 10.1%) in respiratory morbidity with a 1°C increase on hot days. No apparent association was observed for cardiovascular morbidity (-0.5% [-3.0% to 2.1%]). The length of lags had inconsistent effects on the risk of respiratory and cardiovascular morbidity, whereas latitude had little effect on either.

CONCLUSIONS: The effects of temperature on cardiorespiratory morbidity seemed to be smaller and more variable than previous findings related to mortality.

Relevance: 80.00%

Abstract:

This study proposes a framework for a model-based hot spot identification method applying the full Bayes (FB) technique. In comparison with the state-of-the-art approach [i.e., the empirical Bayes (EB) method], the advantage of the FB method is its capability to seamlessly integrate prior information and all available data into posterior distributions on which various ranking criteria can be based. With intersection crash data collected in Singapore, an empirical analysis was conducted to evaluate the following six approaches for hot spot identification: (a) naive ranking using raw crash data, (b) standard EB ranking, (c) FB ranking using a Poisson-gamma model, (d) FB ranking using a Poisson-lognormal model, (e) FB ranking using a hierarchical Poisson model, and (f) FB ranking using a hierarchical Poisson (AR-1) model. The results show that (a) when decision parameters related to the expected crash rate are used, all model-based approaches perform significantly better in safety ranking than the naive ranking method, and (b) the FB approach using hierarchical models significantly outperforms the standard EB approach in correctly identifying hazardous sites.
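
The standard EB benchmark in approach (b) can be sketched in a few lines (Hauer-style shrinkage toward a safety performance function; all numbers hypothetical):

    def eb_expected_crashes(x_observed, mu_spf, phi):
        # Empirical Bayes estimate: shrink a site's observed count toward
        # the safety-performance-function prediction mu_spf; phi is the
        # negative binomial inverse-dispersion parameter.
        w = 1.0 / (1.0 + mu_spf / phi)
        return w * mu_spf + (1.0 - w) * x_observed

    # Hypothetical intersection: 12 observed crashes, SPF predicts 6, phi = 3
    print(eb_expected_crashes(12, 6.0, 3.0))   # shrunk estimate of 10

The FB approaches replace this two-step shrinkage with a single posterior distribution, which is what allows the ranking criteria to reflect full parameter uncertainty.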