68 results for Small Area Estimation
at Queensland University of Technology - ePrints Archive
Abstract:
Background Multilevel and spatial models are increasingly used to obtain substantive information on area-level inequalities in cancer survival. Multilevel models assume independent geographical areas, whereas spatial models explicitly incorporate geographical correlation, often via a conditional autoregressive prior. However, the relative merits of these methods for large population-based studies have not been explored. Using a case-study approach, we report on the implications of using multilevel and spatial survival models to study geographical inequalities in all-cause survival. Methods Multilevel discrete-time and Bayesian spatial survival models were used to study geographical inequalities in all-cause survival for a population-based colorectal cancer cohort of 22,727 cases aged 20–84 years diagnosed during 1997–2007 from Queensland, Australia. Results Both approaches were viable on this large dataset and produced similar estimates of the fixed effects. After adding area-level covariates, the between-area variability in survival using multilevel discrete-time models was no longer significant. Spatial inequalities in survival were also markedly reduced after adjusting for aggregated area-level covariates. Only the multilevel approach, however, provided an estimate of the contribution of geographical variation to the total variation in survival between individual patients. Conclusions With little difference observed between the two approaches in the estimation of fixed effects, multilevel models should be favored if there is a clear hierarchical data structure and measuring the independent impact of individual- and area-level effects on survival differences is of primary interest. Bayesian spatial analyses may be preferred if spatial correlation between areas is important and if the priority is to assess small-area variations in survival and map spatial patterns. Both approaches can be readily fitted to geographically enabled survival data from international settings.
Abstract:
There has been considerable research conducted over the last 20 years focused on predicting motor vehicle crashes on transportation facilities. The range of statistical models commonly applied includes binomial, Poisson, Poisson-gamma (or negative binomial), zero-inflated Poisson and negative binomial (ZIP and ZINB), and multinomial probability models. Given the range of possible modeling approaches and the host of assumptions associated with each, making an intelligent choice for modeling motor vehicle crash data is difficult. There is little discussion in the literature comparing different statistical modeling approaches, identifying which statistical models are most appropriate for modeling crash data, and providing a strong justification from basic crash principles. In the recent literature, it has been suggested that the motor vehicle crash process can successfully be modeled by assuming a dual-state data-generating process, which implies that entities (e.g., intersections, road segments, pedestrian crossings, etc.) exist in one of two states: perfectly safe and unsafe. As a result, the ZIP and ZINB are two models that have been applied to account for the preponderance of “excess” zeros frequently observed in crash count data. The objective of this study is to provide defensible guidance on how to appropriately model crash data. We first examine the motor vehicle crash process using theoretical principles and a basic understanding of the crash process. It is shown that the fundamental crash process follows a Bernoulli trial with unequal probability of independent events, also known as Poisson trials. We examine the evolution of statistical models as they apply to the motor vehicle crash process, and indicate how well they statistically approximate the crash process. We also present the theory behind dual-state count models, and note why they have become popular for modeling crash data.
A simulation experiment is then conducted to demonstrate how crash data give rise to the “excess” zeros frequently observed in practice. It is shown that the Poisson and other mixed probabilistic structures are approximations assumed for modeling the motor vehicle crash process. Furthermore, it is demonstrated that under certain (fairly common) circumstances excess zeros are observed, and that these circumstances arise from low exposure and/or inappropriate selection of time/space scales rather than an underlying dual-state process. In conclusion, carefully selecting the time/space scales for analysis, including an improved set of explanatory variables and/or unobserved heterogeneity effects in count regression models, or applying small-area statistical methods (for observations with low exposure) represent the most defensible modeling approaches for datasets with a preponderance of zeros.
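The excess-zeros argument can be illustrated with a toy simulation (not the paper's actual experiment; site counts, trial counts and the Beta mixing distribution below are hypothetical): when crash counts arise from Poisson trials with unequal, mostly small probabilities, far more zeros appear than a single Poisson with the same mean would predict, even though no site is "perfectly safe".

```python
import math
import random

random.seed(42)

def simulate_counts(n_sites=5000, n_trials=200):
    # Each site gets its own (unequal) per-trial crash probability -> Poisson trials.
    # A skewed Beta keeps most sites nearly, but never perfectly, "safe" (low exposure).
    counts = []
    for _ in range(n_sites):
        p = random.betavariate(0.5, 60.0)
        counts.append(sum(random.random() < p for _ in range(n_trials)))
    return counts

counts = simulate_counts()
mean_count = sum(counts) / len(counts)
observed_zero_frac = counts.count(0) / len(counts)
# Zero fraction a single homogeneous Poisson(mean) would imply:
poisson_zero_frac = math.exp(-mean_count)

print(f"mean count: {mean_count:.2f}")
print(f"observed zero fraction: {observed_zero_frac:.3f}")
print(f"Poisson-implied zero fraction: {poisson_zero_frac:.3f}")
```

By Jensen's inequality the mixed process always produces at least as many zeros as the homogeneous Poisson, so "excess" zeros need no dual-state explanation.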
Abstract:
A modified microstrip-fed planar monopole antenna with open circuited coupled line is presented in this paper. The operational bandwidth of the proposed antenna covers the 2.4 GHz ISM band (2.42-2.48 GHz) and the 5 GHz WLAN band (5 GHz to 6 GHz). The radiating elements occupy a small area of 23×8 mm². The Finite Difference Time Domain method is used to predict the input impedance of the antenna. The calculated return loss shows very good agreement with measured data. Reasonable antenna gain is observed across the operating band. The measured radiation patterns are similar to those of a simple monopole antenna.
Abstract:
Aim: To quantify the consequences of major threats to biodiversity, such as climate and land-use change, it is important to use explicit measures of species persistence, such as extinction risk. The extinction risk of metapopulations can be approximated through simple models, providing a regional snapshot of the extinction probability of a species. We evaluated the extinction risk of three species under different climate change scenarios in three different regions of the Mexican cloud forest, a highly fragmented habitat that is particularly vulnerable to climate change. Location: Cloud forests in Mexico. Methods: Using Maxent, we estimated the potential distribution of cloud forest for three different time horizons (2030, 2050 and 2080) and their overlap with protected areas. Then, we calculated the extinction risk of three contrasting vertebrate species for two scenarios: (1) climate change only (all suitable areas of cloud forest through time) and (2) climate and land-use change (only suitable areas within a currently protected area), using an explicit patch-occupancy approximation model and calculating the joint probability of all populations becoming extinct when the number of remaining patches was less than five. Results: Our results show that the extent of environmentally suitable areas for cloud forest in Mexico will sharply decline in the next 70 years. We discovered that if all habitat outside protected areas is transformed, then only species with small area requirements are likely to persist. With habitat loss through climate change only, high dispersal rates are sufficient for persistence, but this requires protection of all remaining cloud forest areas. Main conclusions: Even if high dispersal rates mitigate the extinction risk of species due to climate change, the synergistic impacts of changing climate and land use further threaten the persistence of species with higher area requirements. 
Our approach for assessing the impacts of threats on biodiversity is particularly useful when there is little time or data for detailed population viability analyses. © 2013 John Wiley & Sons Ltd.
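The patch-occupancy approximation described above can be sketched in a few lines (an illustrative toy, not the authors' model; the patch counts, rates and the independence assumption for the joint extinction probability are all hypothetical):

```python
import random

random.seed(1)

def simulate_metapopulation(n_patches, colonization, extinction, years, loss_per_year=0):
    # Stochastic patch-occupancy sketch: each year, occupied patches go extinct
    # with probability `extinction`; empty patches are colonized with probability
    # proportional to the occupied fraction; habitat loss removes patches outright.
    occupied = [True] * n_patches
    for _ in range(years):
        frac = sum(occupied) / max(len(occupied), 1)
        occupied = [
            (random.random() > extinction) if occ
            else (random.random() < colonization * frac)
            for occ in occupied
        ]
        for _ in range(min(loss_per_year, len(occupied))):
            occupied.pop()  # habitat loss (climate or land-use change)
    return occupied

def joint_extinction_prob(per_patch_persistence):
    # Joint probability that every remaining population is lost, assuming
    # independent patch fates (the kind of approximation invoked once fewer
    # than five patches remain).
    p = 1.0
    for s in per_patch_persistence:
        p *= (1.0 - s)
    return p
```

For example, three patches that each persist with probability 0.9 give a joint extinction probability of 0.1 × 0.1 × 0.1 = 0.001 under independence.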
Abstract:
Background: Preventing risk factor exposure is vital to reduce the high burden from lung cancer. The leading risk factor for developing lung cancer is tobacco smoking. In Australia, despite apparent success in reducing smoking prevalence, there is limited information on small area patterns and small area temporal trends. We sought to estimate spatio-temporal patterns for lung cancer risk factors using routinely collected population-based cancer data. Methods: The analysis used a Bayesian shared component spatio-temporal model, with male and female lung cancer included separately. The shared component reflected exposure to lung cancer risk factors, and was modelled over 477 statistical local areas (SLAs) and 15 years in Queensland, Australia. Analyses were also run adjusting for area-level socioeconomic disadvantage, Indigenous population composition, or remoteness. Results: Strong spatial patterns were observed in the underlying risk factor exposure for both males (median relative risk (RR) across SLAs, compared to the Queensland average, ranged from 0.48 to 2.00) and females (median RR across SLAs ranged from 0.53 to 1.80), with high exposure observed in many remote areas. Strong temporal trends were also observed. Males showed a decrease in the underlying risk across time, while females showed an increase followed by a decrease in the final two years. These patterns were largely consistent across each SLA. The high underlying risk estimates observed among disadvantaged, remote and Indigenous areas decreased after adjustment, particularly among females. Conclusion: The modelled underlying exposure appeared to reflect previous smoking prevalence, with a lag period of around 30 years, consistent with the time taken to develop lung cancer. The consistent temporal trends in lung cancer risk factors across small areas support the hypothesis that past interventions have been equally effective across the state.
However, this also means that spatial inequalities have remained unaddressed, highlighting the potential for future interventions, particularly among remote areas.
Abstract:
Particle swarm optimization (PSO), a population-based algorithm, has recently been used on multi-robot systems. Although this algorithm is applied to solve many optimization problems as well as multi-robot systems, it has some drawbacks when applied to multi-robot search systems seeking a target in a search space containing large static obstacles. One of these defects is premature convergence: a property of basic PSO is that when particles are spread in a search space, as time increases they tend to converge in a small area. This shortcoming is also evident in a multi-robot search system, particularly when there are large static obstacles in the search space that prevent the robots from finding the target easily; therefore, as time increases, the robots converge to a small area that may not contain the target and become entrapped in that area. Another shortcoming is that basic PSO cannot guarantee the global convergence of the algorithm. In other words, initially particles explore different areas, but in some cases they are not good at exploiting promising areas, which will increase the search time. This study proposes a method based on the PSO technique for a multi-robot system to find a target in a search space containing large static obstacles. This method not only overcomes the premature convergence problem but also establishes an efficient balance between exploration and exploitation and guarantees global convergence, reducing the search time by combining PSO with a local search method such as A-star. To validate the effectiveness and usefulness of the algorithms, a simulation environment has been developed for conducting simulation-based experiments in different scenarios and for reporting experimental results. These experimental results have demonstrated that the proposed method overcomes the premature convergence problem and guarantees global convergence.
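The convergence behaviour of basic PSO can be seen in a minimal textbook sketch (this is generic PSO on a toy objective, not the proposed method; the obstacle handling and A-star hybridization are not shown):

```python
import random

random.seed(0)

def pso_minimize(f, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    # Minimal 2-D particle swarm: inertia w, cognitive weight c1, social weight c2.
    lo, hi = bounds
    pos = [[random.uniform(lo, hi), random.uniform(lo, hi)] for _ in range(n_particles)]
    vel = [[0.0, 0.0] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(2):
                r1, r2 = random.random(), random.random()
                # The social pull toward gbest is what drags the whole swarm
                # into one small area over time (the premature-convergence risk).
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            v = f(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

# Sphere function: global minimum at the origin.
best, val = pso_minimize(lambda p: p[0] ** 2 + p[1] ** 2, (-10.0, 10.0))
```

On an obstacle-free unimodal objective this collapse onto one point is exactly what is wanted; the abstract's point is that the same collapse becomes a trap when large obstacles hide the target.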
Abstract:
Statistical analyses of health program participation seek to address a number of objectives compatible with the evaluation of demand for current resources. In this spirit, a spatial hierarchical model is developed for disentangling patterns in participation at the small area level, as a function of population-based demand and additional variation. For the former, a constrained gravity model is proposed to quantify factors associated with spatial choice and account for competition effects, for programs delivered by multiple clinics. The implications of gravity model misspecification within a mixed effects framework are also explored. The proposed model is applied to participation data from a no-fee mammography program in Brisbane, Australia. Attention is paid to the interpretation of various model outputs and their relevance for public health policy.
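A production-constrained gravity allocation of the kind described can be sketched as follows (the attractiveness values and exponential distance-decay with parameter `beta` are illustrative assumptions; the paper's actual specification may differ):

```python
import math

def gravity_allocation(demand, attractiveness, distances, beta=1.5):
    # Production-constrained gravity sketch: area demand O_i is split across
    # clinics j in proportion to attractiveness A_j * exp(-beta * d_ij).
    # The shared denominator builds in competition between clinics: a more
    # attractive or closer clinic draws participants away from the others.
    flows = []
    for i, o_i in enumerate(demand):
        weights = [a * math.exp(-beta * distances[i][j])
                   for j, a in enumerate(attractiveness)]
        total = sum(weights)
        flows.append([o_i * w / total for w in weights])
    return flows
```

Because each row of flows is normalized, every area's demand is fully allocated; two identical clinics at equal distance simply split an area's demand in half.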
Abstract:
There is considerable scientific interest in personal exposure to ultrafine particles. Owing to their small size, these particles are able to penetrate deep into the lungs, where they may cause adverse respiratory, pulmonary and cardiovascular health effects. This article presents Bayesian hierarchical models for estimating and comparing inhaled particle surface area in the lung.
Abstract:
The accuracy of measurement of mechanical properties of a material using instrumented nanoindentation at extremely small penetration depths relies heavily on the determination of the contact area of the indenter. Our experiments have demonstrated that the conventional area function could lead to a significant error when the contact depth was below 40 nm, due to the singularity in the first derivative of the function in this region and the resultant unreasonably sharp peak on the function curve. In this paper, we proposed a new area function that was used to calculate the contact area for indentations where the contact depths varied from 10 to 40 nm. The experimental results have shown that the new area function produces better results than the conventional function. © 2011 Elsevier B.V.
Abstract:
This paper proposes a new approach for state estimation of the angles and frequencies of equivalent areas in large power systems with synchronized phasor measurement units. After coherent generators and their corresponding areas are defined, the generators are aggregated and system reduction is performed in each area of the interconnected power system. The structure of the reduced system is obtained from the characteristics of the reduced linear model and measurement data to form the non-linear model of the reduced system. A Kalman estimator is then designed for the reduced system to provide equivalent dynamic system state estimation using the synchronized phasor measurement data. The method is simulated on two test systems to evaluate its feasibility.
Abstract:
A new approach is proposed for obtaining a non-linear area-based equivalent model of power systems to express the inter-area oscillations using synchronised phasor measurements. The generators that remain coherent for inter-area disturbances over a wide range of operating conditions define the areas, and the reduced model is obtained by representing each area by an equivalent machine. The parameters of the reduced system are identified by processing the obtained measurements, and a non-linear Kalman estimator is then designed for the estimation of equivalent area angles and frequencies. The simulation of the approach on a two-area system shows substantial reduction of non-inter-area modes in the estimated angles. The proposed methods are also applied to a ten-machine system to illustrate the feasibility of the approach on larger and meshed networks.
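As a rough illustration of the estimator design in these two papers (a generic linear Kalman filter on a hypothetical two-state area model, not the identified non-linear estimator; noise levels and dynamics are assumptions):

```python
import random

random.seed(3)

def kalman_angle_freq(measurements, dt, q=1e-4, r=0.0025):
    # Linear Kalman sketch for one equivalent area: state is (angle, frequency
    # deviation), with angle_{k+1} = angle_k + dt * freq_k, and only the angle
    # is measured (as a PMU would provide).
    x = [0.0, 0.0]                    # state estimate
    P = [[1.0, 0.0], [0.0, 1.0]]      # covariance
    est = []
    for z in measurements:
        # Predict through the constant-frequency dynamics.
        x = [x[0] + dt * x[1], x[1]]
        P = [[P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1],
              P[1][1] + q]]
        # Update with the noisy angle measurement z.
        s = P[0][0] + r
        k = [P[0][0] / s, P[1][0] / s]
        y = z - x[0]
        x = [x[0] + k[0] * y, x[1] + k[1] * y]
        P = [[(1 - k[0]) * P[0][0], (1 - k[0]) * P[0][1]],
             [P[1][0] - k[1] * P[0][0], P[1][1] - k[1] * P[0][1]]]
        est.append((x[0], x[1]))
    return est
```

Even though frequency is never measured directly, it is recovered from the slope of the angle track, which is the essence of estimating equivalent area frequencies from PMU angle data.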
Abstract:
The application of Bluetooth (BT) technology to transportation has enabled researchers to make accurate travel time observations on freeway and arterial roads. Bluetooth traffic data are generally incomplete, for they relate only to those vehicles that are equipped with Bluetooth devices and that are detected by the Bluetooth sensors of the road network. The fraction of detected vehicles versus the total number of transiting vehicles is often referred to as the Bluetooth Penetration Rate (BTPR). The aim of this study is to precisely define the spatio-temporal relationship between the quantities that become available through the partial, noisy BT observations and the hidden variables that describe the actual dynamics of vehicular traffic. To do so, we propose to incorporate a multi-class traffic model into a sequential Monte Carlo estimation algorithm. Our framework has been applied to empirical travel time investigations in the Brisbane metropolitan region.
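The sequential Monte Carlo idea can be sketched with a bootstrap particle filter on a toy travel-time state (the random-walk dynamics, noise levels and prior below are illustrative assumptions, not the study's multi-class traffic model):

```python
import math
import random

random.seed(5)

def bootstrap_particle_filter(observations, n_particles=2000,
                              process_sd=2.0, obs_sd=5.0, init=(60.0, 20.0)):
    # Bootstrap SMC sketch: the hidden state is the "true" link travel time
    # (seconds), observed only through noisy, intermittent Bluetooth matches.
    # An observation of None models an interval with no BT-equipped vehicle
    # detected (the low-penetration-rate case).
    mu, sd = init
    particles = [random.gauss(mu, sd) for _ in range(n_particles)]
    estimates = []
    for z in observations:
        # Propagate: travel time drifts as a random walk.
        particles = [p + random.gauss(0.0, process_sd) for p in particles]
        if z is None:
            estimates.append(sum(particles) / n_particles)
            continue
        # Weight by a Gaussian observation likelihood, then resample.
        weights = [math.exp(-0.5 * ((z - p) / obs_sd) ** 2) for p in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        particles = random.choices(particles, weights=weights, k=n_particles)
        estimates.append(sum(particles) / n_particles)
    return estimates
```

The filter keeps producing estimates through the None intervals (the prediction step alone), which is what makes SMC attractive when BT observations are partial.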
Abstract:
Monitoring pedestrian and cyclist movement is an important area of research in transport, crowd safety, urban design and human behaviour assessment. Media Access Control (MAC) address data have recently been used as a potential source of information for extracting features of people's movement. MAC addresses are unique identifiers of the WiFi and Bluetooth wireless technologies in smart electronic devices such as mobile phones, laptops and tablets. The unique number of each WiFi and Bluetooth MAC address can be captured and stored by MAC address scanners. MAC address data in fact allow for unannounced, non-participatory tracking of people, and their use has recently been explored for mass events, shopping centres, airports, train stations, etc. For travel time estimation, setting up a scanner with a high antenna gain is usually recommended on highways and main roads to track vehicle movements, whereas high gains can have drawbacks in the case of pedestrians and cyclists. Pedestrians and cyclists mainly move in built-up districts and along city pathways, where there is significant noise from other fixed WiFi and Bluetooth devices. High antenna gains cover wide areas, which results in scanning more samples from pedestrians' and cyclists' MAC devices; however, anomalies (such as fixed devices) may also be captured, which increases the complexity and processing time of the data analysis. On the other hand, small-gain antennas yield fewer anomalies in the data, but at the cost of a lower overall sample size of pedestrian and cyclist data. This paper studies the effect of antenna characteristics on MAC address data for the travel time estimation of pedestrians and cyclists. The results of the empirical case study compare the effects of small and high antenna gains in order to suggest an optimal set-up for increasing the accuracy of pedestrian and cyclist travel time estimation.
Abstract:
The aim of this paper is to provide a Bayesian formulation of the so-called magnitude-based inference approach to quantifying and interpreting effects, and, in a case-study example, to provide accurate probabilistic statements that correspond to the intended magnitude-based inferences. The model is described in the context of a published small-scale athlete study which employed a magnitude-based inference approach to compare the effect of two altitude training regimens (live high-train low (LHTL) and intermittent hypoxic exposure (IHE)) on running performance and blood measurements of elite triathletes. The posterior distributions, and corresponding point and interval estimates, for the parameters and associated effects and comparisons of interest were estimated using Markov chain Monte Carlo simulations. The Bayesian analysis was shown to provide more direct probabilistic comparisons of treatments and to be able to identify small effects of interest. The approach avoided asymptotic assumptions and overcame issues such as multiple testing. Bayesian analysis of unscaled effects showed a probability of 0.96 that LHTL yields a substantially greater increase in hemoglobin mass than IHE, a 0.93 probability of a substantially greater improvement in running economy, and a greater than 0.96 probability that both IHE and LHTL yield a substantially greater improvement in maximum blood lactate concentration compared to a placebo. The conclusions are consistent with those obtained using a 'magnitude-based inference' approach that has been promoted in the field. The paper demonstrates that a fully Bayesian analysis is a simple and effective way of analysing small effects, providing a rich set of results that are straightforward to interpret in terms of probabilistic statements.
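Probabilistic statements of the kind reported above (e.g. "a probability of 0.96 of a substantially greater increase") can be computed directly from MCMC posterior draws. A minimal sketch with hypothetical draws, not the paper's data or thresholds:

```python
import random

random.seed(7)

def prob_substantial(posterior_draws, threshold):
    # Probability that the effect exceeds the smallest worthwhile change,
    # estimated as the fraction of posterior draws above the threshold.
    return sum(d > threshold for d in posterior_draws) / len(posterior_draws)

# Hypothetical posterior for a treatment difference: in practice these draws
# would come from an MCMC sampler, not a normal generator.
draws = [random.gauss(2.0, 1.0) for _ in range(100_000)]
p = prob_substantial(draws, threshold=0.5)
print(f"P(effect > 0.5) ≈ {p:.3f}")
```

This is the Bayesian counterpart of a magnitude-based inference: rather than a p-value, the output is a direct probability that the effect is substantial.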