111 results for Geographic Regression Discontinuity


Relevance: 20.00%

Abstract:

Malaria has been a heavy social and health burden in the remote and poor areas of southern China. Analyses of malaria epidemic patterns can uncover important features of malaria transmission. This study identified spatial clusters, seasonal patterns, and geographic variations of malaria deaths at the county level in Yunnan, China, during 1991–2010. A discrete Poisson model was used to identify purely spatial clusters of malaria deaths. Logistic regression analysis was performed to detect changes in geographic patterns. The results show that malaria mortality declined in Yunnan over the study period and that the most likely spatial clusters (relative risk [RR] = 23.03–32.06, P < 0.001) of malaria deaths were identified in western Yunnan along the China–Myanmar border. The highest risk of malaria deaths occurred in autumn (RR = 58.91, P < 0.001) and summer (RR = 31.91, P < 0.001). The results suggested that the geographic distribution of malaria deaths changed significantly with longitude, indicating decreased malaria mortality in eastern areas over the last two decades, whereas there was no significant change along latitude during the same period. Public health interventions should target populations in western Yunnan along border areas, especially focusing on floating populations crossing international borders.
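
As a rough illustration of the logistic-regression step described in this abstract, the sketch below tests whether the geographic distribution of deaths shifted between the early and late halves of a study period. The file name, column names, and period split are assumptions for illustration, not details taken from the study.

```python
# Hypothetical sketch: regress period (late vs. early) on the longitude and
# latitude of each death record to test for a geographic shift over time.
import pandas as pd
import statsmodels.api as sm

deaths = pd.read_csv("yunnan_malaria_deaths.csv")   # one row per death: year, longitude, latitude
deaths["late_period"] = (deaths["year"] >= 2001).astype(int)  # 1991-2000 vs. 2001-2010

X = sm.add_constant(deaths[["longitude", "latitude"]])
fit = sm.Logit(deaths["late_period"], X).fit()
print(fit.summary())
# A significant negative coefficient on longitude would indicate that deaths in the
# later period were concentrated further west, i.e. mortality declined in the east.
```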

Relevance: 20.00%

Abstract:

Hot spot identification (HSID) aims to identify potential sites (roadway segments, intersections, crosswalks, interchanges, ramps, etc.) with disproportionately high crash risk relative to similar sites. An inefficient HSID methodology might result in either identifying a safe site as high risk (false positive) or a high-risk site as safe (false negative), and consequently lead to misuse of the available public funds, poor investment decisions, and inefficient risk management practice. Current HSID methods suffer from issues such as underreporting of minor injury and property damage only (PDO) crashes, the difficulty of incorporating crash severity into the methodology, and the selection of a proper safety performance function to model crash data that are often heavily skewed by a preponderance of zeros. Addressing these challenges, this paper proposes a combination of a PDO equivalency calculation and a quantile regression technique to identify hot spots in a transportation network. In particular, issues related to underreporting and crash severity are tackled by incorporating equivalent PDO crashes, whilst the concerns related to the non-count nature of equivalent PDO crashes and the skewness of crash data are addressed by the non-parametric quantile regression technique. The proposed method identifies covariate effects on various quantiles of a population, rather than on the population mean as in most methods in practice, which corresponds more closely with how black spots are identified in practice. The proposed methodology is illustrated using rural road segment data from Korea and compared against the traditional EB method with negative binomial regression. Application of a quantile regression model on equivalent PDO crashes enables identification of a set of high-risk sites that reflect the true safety costs to society, simultaneously reduces the influence of under-reported PDO and minor injury crashes, and overcomes the limitations of the traditional NB model in dealing with a preponderance of zeros and right-skewed data.
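
A minimal sketch of the EPDO-plus-quantile-regression idea, assuming hypothetical severity weights, file and column names, and a 95th-percentile threshold; the paper's actual weights, covariates, and quantile level are not reproduced here.

```python
# Illustrative sketch: convert crashes to equivalent PDO counts, fit a high
# quantile of EPDO as a function of segment covariates, and flag segments
# whose observed EPDO exceeds the fitted quantile for comparable sites.
import pandas as pd
import statsmodels.api as sm

segs = pd.read_csv("rural_segments.csv")  # crash counts and roadway covariates per segment

# Equivalent PDO crashes: weight severe crashes more heavily than PDO crashes.
weights = {"fatal": 542, "injury": 11, "pdo": 1}          # hypothetical weights
segs["epdo"] = (weights["fatal"] * segs["fatal_crashes"]
                + weights["injury"] * segs["injury_crashes"]
                + weights["pdo"] * segs["pdo_crashes"])

X = sm.add_constant(segs[["aadt", "length_km", "lane_width"]])
q95 = sm.QuantReg(segs["epdo"], X).fit(q=0.95)

segs["hot_spot"] = segs["epdo"] > q95.predict(X)
print(segs.loc[segs["hot_spot"], ["segment_id", "epdo"]].head())
```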

Relevance: 20.00%

Abstract:

INTRODUCTION Dengue fever (DF) in Vietnam remains a serious emerging arboviral disease, which generates significant concern among international health authorities. Incidence rates of DF have increased significantly during the last few years in many provinces and cities, especially Hanoi. The purpose of this study was to detect DF hot spots and identify the dispersion dynamics of DF over the period between 2004 and 2009 in Hanoi, Vietnam. METHODS Daily data on DF cases and population data for each postcode area of Hanoi between January 1998 and December 2009 were obtained from the Hanoi Center for Preventive Health and the General Statistics Office of Vietnam. Moran's I statistic was used to assess the spatial autocorrelation of reported DF. Spatial scan statistics and logistic regression were used to identify space-time clusters and the dispersion of DF. RESULTS The study revealed a clear trend of geographic expansion of DF transmission in Hanoi over the study period (OR 1.17, 95% CI 1.02-1.34). The spatial scan statistics showed that 6/14 (42.9%) districts in Hanoi had significant cluster patterns, which lasted 29 days and were limited to a radius of 1,000 m. The study also demonstrated that most DF cases occurred between June and November, when rainfall and temperatures are highest. CONCLUSIONS There is evidence that statistically significant clusters of DF exist in Hanoi and that the geographical distribution of DF has expanded over recent years. This finding provides a foundation for further investigation into the social and environmental factors responsible for changing disease patterns, and provides data to inform program planning for DF control.
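
For reference, a small self-contained sketch of the global Moran's I statistic mentioned in the methods; the toy rates and weights matrix below are placeholders rather than the Hanoi postcode data.

```python
# Global Moran's I: measures spatial autocorrelation of area-level values
# under a spatial weights matrix.
import numpy as np

def morans_i(rates: np.ndarray, W: np.ndarray) -> float:
    """Global Moran's I for values `rates` under spatial weights `W`."""
    z = rates - rates.mean()
    n = len(rates)
    num = n * (z @ W @ z)
    den = W.sum() * (z @ z)
    return num / den

# Toy example: 4 areas on a line, neighbours share an edge.
rates = np.array([2.0, 2.5, 9.0, 10.0])
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
W = W / W.sum(axis=1, keepdims=True)   # row-standardise
print(morans_i(rates, W))              # positive value indicates spatial clustering
```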

Relevance: 20.00%

Abstract:

Visual localization in outdoor environments is often hampered by natural variation in appearance caused by weather phenomena, diurnal fluctuations in lighting, and seasonal changes. Such changes are global across an environment and, in the case of global light changes and seasonal variation, the change in appearance occurs in a regular, cyclic manner. Visual localization could be greatly improved if it were possible to predict the appearance of a particular location at a particular time, based on the appearance of the location in the past and knowledge of how appearance changes over time. In this paper, we investigate whether global appearance changes in an environment can be learned sufficiently well to improve visual localization performance. We use time of day as a test case, and generate transformations between morning and afternoon using sample images from a training set. We demonstrate that the learned transformation generalizes beyond the training data and show that visual localization on a test set improves relative to raw image comparison. The improvement in localization remains when the area is revisited several weeks later.
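
As a loose sketch of the underlying idea (not the paper's actual pipeline), a linear transformation between morning and afternoon image descriptors can be learned by least squares and applied to query descriptors before matching.

```python
# Hedged sketch: learn a linear map from morning descriptors to afternoon
# descriptors, then transform queries into the afternoon domain for matching.
import numpy as np

def learn_transform(morning_feats: np.ndarray, afternoon_feats: np.ndarray) -> np.ndarray:
    """Least-squares linear map T such that afternoon ≈ morning @ T."""
    T, *_ = np.linalg.lstsq(morning_feats, afternoon_feats, rcond=None)
    return T

def localise(query_feat: np.ndarray, T: np.ndarray, db_feats: np.ndarray) -> int:
    """Index of the best-matching database image after transforming the query."""
    predicted = query_feat @ T
    dists = np.linalg.norm(db_feats - predicted, axis=1)
    return int(np.argmin(dists))

# Toy usage with random descriptors standing in for real image features.
rng = np.random.default_rng(0)
morning = rng.normal(size=(100, 32))
true_T = 0.1 * rng.normal(size=(32, 32)) + np.eye(32)
afternoon = morning @ true_T
T = learn_transform(morning, afternoon)
print(localise(morning[7], T, afternoon))   # expected to recover index 7
```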

Relevance: 20.00%

Abstract:

Due to knowledge gaps in relation to urban stormwater quality processes, an in-depth understanding of model uncertainty can enhance decision making. Uncertainty in stormwater quality models can originate from a range of sources, such as the complexity of urban rainfall-runoff-stormwater pollutant processes and the paucity of observed data. Unfortunately, studies relating to epistemic uncertainty, which arises from the simplification of reality, are limited, and such uncertainty is often deemed mostly unquantifiable. This paper presents a statistical modelling framework for ascertaining epistemic uncertainty associated with pollutant wash-off under a regression modelling paradigm, using Ordinary Least Squares Regression (OLSR) and Weighted Least Squares Regression (WLSR) methods with a Bayesian/Gibbs sampling statistical approach. The study results confirmed that WLSR, assuming probability-distributed data, provides more realistic uncertainty estimates of the observed and predicted wash-off values than OLSR modelling. It was also noted that the Bayesian/Gibbs sampling approach is superior to the classical statistical and deterministic approaches most commonly used in water quality modelling. The study outcomes confirmed that the prediction error associated with wash-off replication is relatively high due to limited data availability. The uncertainty analysis also highlighted the variability of the wash-off modelling coefficient k as a function of complex physical processes, which are primarily influenced by surface characteristics and rainfall intensity.
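
A minimal sketch of a Gibbs sampler for the WLSR case, assuming a flat prior on the regression coefficients and an inverse-gamma prior on the error variance; the priors, weights, and model form are illustrative rather than the study's specification.

```python
# Gibbs sampler for y ~ N(X @ beta, sigma^2 / w_i): alternate between the
# conditional for beta given sigma^2 and the conditional for sigma^2 given beta.
import numpy as np

def gibbs_wlsr(X, y, w, n_iter=5000, a0=0.01, b0=0.01, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    W = np.diag(w)                      # observation weights (WLSR); identity gives OLSR
    XtWX = X.T @ W @ X
    XtWy = X.T @ W @ y
    beta_hat = np.linalg.solve(XtWX, XtWy)

    sigma2 = 1.0
    betas, sigmas = [], []
    for _ in range(n_iter):
        # beta | sigma^2, y  ~  N(beta_hat, sigma^2 * (X'WX)^-1)  under a flat prior
        beta = rng.multivariate_normal(beta_hat, sigma2 * np.linalg.inv(XtWX))
        # sigma^2 | beta, y  ~  Inverse-Gamma(a0 + n/2, b0 + 0.5 * weighted RSS)
        resid = y - X @ beta
        sigma2 = 1.0 / rng.gamma(a0 + n / 2.0, 1.0 / (b0 + 0.5 * resid @ W @ resid))
        betas.append(beta)
        sigmas.append(sigma2)
    return np.array(betas), np.array(sigmas)
```

Posterior intervals for the wash-off coefficients and predictions can then be read directly off the retained draws, which is how the uncertainty estimates described above would typically be summarised.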

Relevance: 20.00%

Abstract:

Human infection with a novel low pathogenicity influenza A(H7N9) virus in eastern China has recently raised global public health concerns (1). The geographic sources of infection have yet to be fully clarified, and confirmed human cases from 1 province have not been linked to those from other provinces. While some studies have identified epidemiologic characteristics of subtype H7N9 cases and clinical differences between these cases and cases of highly pathogenic influenza A(H5N1), another avian influenza affecting parts of China (2–4), the spatial epidemiology of human infection with influenza subtypes H7N9 and H5N1 in China has yet to be elucidated. To test the hypothesis of co-distribution of high-risk clusters of both types of infection, we used all available data on human cases in mainland China and investigated the geospatial epidemiologic features...

Relevance: 20.00%

Abstract:

This paper develops a semiparametric estimation approach for mixed count regression models based on series expansion for the unknown density of the unobserved heterogeneity. We use the generalized Laguerre series expansion around a gamma baseline density to model unobserved heterogeneity in a Poisson mixture model. We establish the consistency of the estimator and present a computational strategy to implement the proposed estimation techniques in the standard count model as well as in truncated, censored, and zero-inflated count regression models. Monte Carlo evidence shows that the finite sample behavior of the estimator is quite good. The paper applies the method to a model of individual shopping behavior.
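
As a hedged reference point, the gamma-baseline case of this mixture (no Laguerre correction terms) reduces to a negative binomial likelihood; a sketch of maximising that baseline likelihood with scipy is shown below, with the understanding that the paper's estimator adds generalized Laguerre polynomial terms around this baseline.

```python
# Gamma-baseline Poisson mixture: integrating Poisson(mu * v) over gamma
# heterogeneity v with mean 1 and shape r gives the negative binomial pmf.
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def neg_loglik(params, X, y):
    beta, log_r = params[:-1], params[-1]
    mu = np.exp(X @ beta)                     # Poisson mean, exp link
    r = np.exp(log_r)                         # gamma heterogeneity shape (mean fixed at 1)
    ll = (gammaln(y + r) - gammaln(r) - gammaln(y + 1)
          + r * np.log(r / (r + mu)) + y * np.log(mu / (r + mu)))
    return -ll.sum()

def fit_gamma_baseline(X, y):
    start = np.zeros(X.shape[1] + 1)
    return minimize(neg_loglik, start, args=(X, y), method="BFGS")

# Toy usage with simulated overdispersed counts (illustrative only).
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(500), rng.normal(size=500)])
v = rng.gamma(2.0, 0.5, size=500)             # heterogeneity, mean 1
y = rng.poisson(np.exp(0.5 + 0.8 * X[:, 1]) * v)
print(fit_gamma_baseline(X, y).x)
```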

Relevance: 20.00%

Abstract:

We ascertained villagers’ perceptions about the importance of forests for their livelihoods and health through 1,837 reliably answered interviews of mostly male respondents from 185 villages in Indonesian and Malaysian Borneo. Variation in these perceptions related to several environmental and social variables, as shown in classification and regression analyses. Overall patterns indicated that forest use and cultural values are highest among people on Borneo who live close to remaining forest, and especially among older Christian residents. Support for forest clearing depended strongly on the scale at which deforestation occurs. Deforestation for small-scale agriculture was generally considered to be positive because it directly benefits people’s welfare. Large-scale deforestation (e.g., for industrial oil palm or acacia plantations), on the other hand, appeared to be more context-dependent, with most respondents considering it to have overall negative impacts on them, but with people in some areas considering the benefits to outweigh the costs. The interviews indicated high awareness of negative environmental impacts of deforestation, with high levels of concern over higher temperatures, air pollution and loss of clean water sources. Our study is unique in its geographic and trans-national scale. Our findings enable the development of maps of forest use and perceptions that could inform land use planning at a range of scales. Incorporating perspectives such as these could significantly reduce conflict over forest resources and ultimately result in more equitable development processes.

Relevance: 20.00%

Abstract:

Existing crowd counting algorithms rely on holistic, local or histogram-based features to capture crowd properties. Regression is then employed to estimate the crowd size. Insufficient testing across multiple datasets has made it difficult to compare and contrast different methodologies. This paper presents an evaluation across multiple datasets to compare holistic, local and histogram-based methods, and to compare various image features and regression models. A K-fold cross validation protocol is followed to evaluate the performance across five public datasets: UCSD, PETS 2009, Fudan, Mall and Grand Central. Image features are categorised into five types: size, shape, edges, keypoints and textures. The regression models evaluated are: Gaussian process regression (GPR), linear regression, K nearest neighbours (KNN) and neural networks (NN). The results demonstrate that local features outperform equivalent holistic and histogram-based features; that optimal performance is observed using all image features except textures; and that GPR outperforms linear, KNN and NN regression.
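
A brief sketch of the kind of regression comparison described here, using scikit-learn with placeholder features and counts standing in for the actual datasets and extracted image features.

```python
# Compare several regressors for count estimation under K-fold cross validation.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))                # placeholder per-frame feature vectors
y = rng.poisson(25, size=200).astype(float)   # placeholder ground-truth crowd counts

models = {
    "GPR": GaussianProcessRegressor(),
    "Linear": LinearRegression(),
    "KNN": KNeighborsRegressor(n_neighbors=5),
    "NN": MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000),
}

cv = KFold(n_splits=5, shuffle=True, random_state=0)
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=cv, scoring="neg_mean_absolute_error")
    print(f"{name}: MAE = {-scores.mean():.2f} (sd {scores.std():.2f})")
```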

Relevance: 20.00%

Abstract:

Land-use regression (LUR) is a technique that can improve the accuracy of air pollution exposure assessment in epidemiological studies. Most LUR models are developed for single cities, which places limitations on their applicability to other locations. We sought to develop a model to predict nitrogen dioxide (NO2) concentrations with national coverage of Australia by using satellite observations of tropospheric NO2 columns combined with other predictor variables. We used a generalised estimating equation (GEE) model to predict annual and monthly average ambient NO2 concentrations measured by a national monitoring network from 2006 through 2011. The best annual model explained 81% of spatial variation in NO2 (absolute RMS error=1.4 ppb), while the best monthly model explained 76% (absolute RMS error=1.9 ppb). We applied our models to predict NO2 concentrations at the ~350,000 census mesh blocks across the country (a mesh block is the smallest spatial unit in the Australian census). National population-weighted average concentrations ranged from 7.3 ppb (2006) to 6.3 ppb (2011). We found that a simple approach using tropospheric NO2 column data yielded models with slightly better predictive ability than those produced using a more involved approach that required simulation of surface-to-column ratios. The models were capable of capturing within-urban variability in NO2, and offer the ability to estimate ambient NO2 concentrations at monthly and annual time scales across Australia from 2006–2011. We are making our model predictions freely available for research.
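
An illustrative GEE sketch in the spirit of the model described above, with hypothetical file and column names standing in for the study's satellite and land-use predictors.

```python
# Generalised estimating equations: monthly NO2 at monitoring sites regressed
# on a satellite tropospheric NO2 column plus land-use predictors, with
# monitoring site as the grouping variable.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

obs = pd.read_csv("monthly_no2_sites.csv")   # one row per site-month

model = smf.gee(
    "no2_ppb ~ trop_no2_column + major_road_length_1km + impervious_pct + elevation",
    groups="site_id",
    data=obs,
    cov_struct=sm.cov_struct.Exchangeable(),
    family=sm.families.Gaussian(),
)
result = model.fit()
print(result.summary())

# Predictions for every census mesh block would then follow by evaluating the
# fitted model on the same predictors computed at each block.
```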

Relevance: 20.00%

Abstract:

To enhance the efficiency of regression parameter estimation by modeling the correlation structure of correlated binary error terms in quantile regression with repeated measurements, we propose a Gaussian pseudolikelihood approach for estimating correlation parameters and selecting the most appropriate working correlation matrix simultaneously. The induced smoothing method is applied to estimate the covariance of the regression parameter estimates, which can bypass density estimation of the errors. Extensive numerical studies indicate that the proposed method performs well in selecting an accurate correlation structure and improving regression parameter estimation efficiency. The proposed method is further illustrated by analyzing a dental dataset.
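
A minimal sketch of the induced-smoothing idea for a single quantile under working independence; the Gaussian pseudolikelihood step for estimating and selecting the working correlation structure is not shown, and the bandwidth choice is a common convention rather than necessarily the paper's.

```python
# Induced smoothing: replace the indicator in the quantile estimating equation
# with a normal CDF so the equation is smooth in beta and solvable by a root finder.
import numpy as np
from scipy.optimize import fsolve
from scipy.stats import norm

def smoothed_estimating_eq(beta, X, y, tau, h):
    resid = y - X @ beta
    # Smoothed version of  X' (tau - I(resid < 0)) = 0
    return X.T @ (tau - norm.cdf(-resid / h))

def fit_smoothed_qr(X, y, tau=0.5):
    n, p = X.shape
    h = n ** -0.5                          # common bandwidth choice for induced smoothing
    beta0 = np.linalg.lstsq(X, y, rcond=None)[0]
    return fsolve(smoothed_estimating_eq, beta0, args=(X, y, tau, h))

# Toy usage with simulated data.
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(300), rng.normal(size=300)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=300)
print(fit_smoothed_qr(X, y, tau=0.5))
```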

Relevance: 20.00%

Abstract:

In the Bayesian framework, a standard approach to model criticism is to compare some function of the observed data to a reference predictive distribution. The result of the comparison can be summarized in the form of a p-value, and it is well known that computation of some kinds of Bayesian predictive p-values can be challenging. The use of regression adjustment approximate Bayesian computation (ABC) methods is explored for this task. Two problems are considered. The first is the calibration of posterior predictive p-values so that they are uniformly distributed under some reference distribution for the data. Computation is difficult because the calibration process requires repeated approximation of the posterior for different data sets under the reference distribution. The second problem considered is approximation of distributions of prior predictive p-values for the purpose of choosing weakly informative priors in the case where the model checking statistic is expensive to compute. Here the computation is difficult because of the need to repeatedly sample from a prior predictive distribution for different values of a prior hyperparameter. In both of these problems we argue that high accuracy in the computations is not required, which makes fast approximations such as regression adjustment ABC very useful. We illustrate our methods with several examples.
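
For orientation, a small self-contained sketch of the basic regression-adjustment ABC step (Beaumont-style local linear adjustment) on a toy normal-mean problem; the calibration and prior-choice procedures described in the abstract would wrap many such approximations, and the tolerance and prior below are illustrative.

```python
# Regression-adjustment ABC: accept the prior draws whose simulated summaries
# are closest to the observed summary, then shift them via a local linear
# regression of the parameter on (simulated summary - observed summary).
import numpy as np

def abc_regression_adjust(theta, summaries, s_obs, accept_frac=0.1):
    """theta: (n,) prior draws; summaries: (n, d) simulated summaries; s_obs: (d,)."""
    dist = np.linalg.norm(summaries - s_obs, axis=1)
    keep = dist <= np.quantile(dist, accept_frac)           # rejection step
    th, S = theta[keep], summaries[keep]

    # Local linear regression theta ~ a + (S - s_obs) @ b, then shift draws to S = s_obs.
    D = np.column_stack([np.ones(keep.sum()), S - s_obs])
    coef, *_ = np.linalg.lstsq(D, th, rcond=None)
    return th - (S - s_obs) @ coef[1:]

# Toy usage: infer the mean of a normal sample from its sample mean.
rng = np.random.default_rng(1)
theta = rng.normal(0, 5, size=20000)                         # prior draws
summaries = rng.normal(theta[:, None], 1, size=(20000, 30)).mean(axis=1, keepdims=True)
posterior = abc_regression_adjust(theta, summaries, s_obs=np.array([1.3]))
print(posterior.mean(), posterior.std())
```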