139 results for statistical model for macromolecules
Abstract:
Objectives This study builds on research undertaken by Bernasco and Nieuwbeerta and explores the generalizability of a theoretically derived offender target selection model in three cross-national study regions. Methods Taking a discrete spatial choice approach, we estimate the impact of both environment- and offender-level factors on residential burglary placement in the Netherlands, the United Kingdom, and Australia. Combining cleared burglary data from all study regions in a single statistical model, we make statistical comparisons between environments. Results In all three study regions, the likelihood an offender selects an area for burglary is positively influenced by proximity to their home, the proportion of easily accessible targets, and the total number of targets available. Furthermore, in two of the three study regions, juvenile offenders under the legal driving age are significantly more influenced by target proximity than adult offenders. Post hoc tests indicate the magnitudes of these impacts vary significantly between study regions. Conclusions While burglary target selection strategies are consistent with opportunity-based explanations of offending, the impact of environmental context is significant. As such, the approach undertaken in combining observations from multiple study regions may aid criminology scholars in assessing the generalizability of observed findings across multiple environments.
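The discrete spatial choice approach referenced above is commonly operationalized as a conditional (McFadden) logit model, in which the probability that an offender selects a given area depends on that area's attributes relative to all alternative areas. The sketch below is illustrative only: the feature names and synthetic data are assumptions, not the study's actual specification or data.

```python
# Minimal conditional-logit sketch for offender target-area choice.
# Illustrative only: feature names and synthetic data are assumed, not the study's data.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_offenders, n_areas, n_feat = 200, 30, 3    # one choice set of areas per offender

# X[i, j, :] = attributes of area j as seen by offender i
# (e.g. proximity to home, share of accessible targets, number of targets)
X = rng.normal(size=(n_offenders, n_areas, n_feat))
beta_true = np.array([-1.0, 0.5, 0.8])       # negative distance effect, positive opportunity effects

# Simulate choices: additive i.i.d. Gumbel errors yield conditional-logit choice probabilities
util = X @ beta_true + rng.gumbel(size=(n_offenders, n_areas))
choice = util.argmax(axis=1)                 # observed chosen area per offender

def neg_log_lik(beta):
    v = X @ beta                                                  # systematic utilities
    log_p = v - np.logaddexp.reduce(v, axis=1, keepdims=True)     # log choice probabilities
    return -log_p[np.arange(n_offenders), choice].sum()

fit = minimize(neg_log_lik, np.zeros(n_feat), method="BFGS")
print("estimated coefficients:", fit.x)
```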
Abstract:
Meta-analysis is a method to obtain a weighted average of results from various studies. In addition to pooling effect sizes, meta-analysis can also be used to estimate disease frequencies, such as incidence and prevalence. In this article we present methods for the meta-analysis of prevalence. We discuss the logit and double arcsine transformations to stabilise the variance. We note the special situation of multiple category prevalence, and propose solutions to the problems that arise. We describe the implementation of these methods in the MetaXL software, and present a simulation study and the example of multiple sclerosis from the Global Burden of Disease 2010 project. We conclude that the double arcsine transformation is preferred over the logit, and that the MetaXL implementation of multiple category prevalence is an improvement in the methodology of the meta-analysis of prevalence.
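A minimal sketch of the double arcsine (Freeman–Tukey) transformation and inverse-variance pooling described above; the study counts are made up for illustration, and the back-transformation shown is a simple approximation rather than the correction implemented in MetaXL.

```python
# Freeman-Tukey double arcsine transformation for meta-analysis of prevalence (illustrative sketch).
import numpy as np

events = np.array([12, 40, 7, 95])          # hypothetical case counts per study
n      = np.array([300, 1200, 150, 2500])   # hypothetical sample sizes

# Double arcsine transform stabilises the variance of a proportion
t   = np.arcsin(np.sqrt(events / (n + 1))) + np.arcsin(np.sqrt((events + 1) / (n + 1)))
var = 1.0 / (n + 0.5)

# Fixed-effect inverse-variance pooling on the transformed scale
w        = 1.0 / var
t_pooled = np.sum(w * t) / np.sum(w)
se       = np.sqrt(1.0 / np.sum(w))

# Crude back-transformation to a prevalence (a harmonic-mean-based correction is often preferred)
prev = np.sin(t_pooled / 2.0) ** 2
print(f"pooled prevalence ~ {prev:.4f} "
      f"(transformed-scale 95% CI {t_pooled - 1.96*se:.3f} to {t_pooled + 1.96*se:.3f})")
```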
Abstract:
BACKGROUND This paper describes the first national burden of disease study for South Africa. The main focus is the burden due to premature mortality, i.e. years of life lost (YLLs). In addition, estimates of the burden contributed by morbidity, i.e. the years lived with disability (YLDs), are obtained to calculate disability-adjusted life years (DALYs); and the impact of AIDS on premature mortality in the year 2010 is assessed. METHOD Owing to the rapid mortality transition and the lack of timely data, a modelling approach has been adopted. The total mortality for the year 2000 is estimated using a demographic and AIDS model. The non-AIDS cause-of-death profile is estimated using three sources of data: Statistics South Africa, the National Department of Home Affairs, and the National Injury Mortality Surveillance System. A ratio method is used to estimate the YLDs from the YLL estimates. RESULTS The top single cause of mortality burden was HIV/AIDS, followed by homicide, tuberculosis, road traffic accidents and diarrhoea. HIV/AIDS accounted for 38% of total YLLs, which is proportionately higher for females (47%) than for males (33%). Pre-transitional diseases, usually associated with poverty and underdevelopment, accounted for 25% of YLLs, non-communicable diseases for 21% and injuries for 16%. The DALY estimates highlight the fact that mortality alone underestimates the burden of disease, especially with regard to unintentional injuries, respiratory disease, and nervous system, mental and sense organ disorders. The impact of HIV/AIDS is expected to more than double the burden of premature mortality by the year 2010. CONCLUSION This study has drawn together data from a range of sources to develop coherent estimates of premature mortality by cause. South Africa is experiencing a quadruple burden of disease comprising the pre-transitional diseases, the emerging chronic diseases, injuries, and HIV/AIDS. Unless interventions that reduce morbidity and delay mortality become widely available, the burden due to HIV/AIDS can be expected to grow very rapidly in the next few years. An improved base of information is needed to assess the morbidity impact more accurately.
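As a worked illustration of the summary measures used above: DALYs are the sum of years of life lost to premature mortality (YLLs) and years lived with disability (YLDs), and the study derives YLDs from YLLs via cause-specific ratios. All numbers below are hypothetical, not the study's estimates.

```python
# Hypothetical DALY arithmetic: DALY = YLL + YLD, with YLD obtained from YLL via a ratio.
deaths = 15_000                 # deaths from a cause in the reference year (hypothetical)
life_expectancy_lost = 30.0     # average standard life expectancy at age of death, years (hypothetical)
yld_to_yll_ratio = 0.25         # cause-specific YLD/YLL ratio (hypothetical)

yll  = deaths * life_expectancy_lost
yld  = yll * yld_to_yll_ratio
daly = yll + yld
print(f"YLL={yll:,.0f}  YLD={yld:,.0f}  DALY={daly:,.0f}")
```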
Abstract:
Background Summarizing the epidemiology of major depressive disorder (MDD) at a global level is complicated by significant heterogeneity in the data. The aim of this study is to present a global summary of the prevalence and incidence of MDD, accounting for sources of bias, and dealing with heterogeneity. Findings are informing MDD burden quantification in the Global Burden of Disease (GBD) 2010 Study. Method A systematic review of prevalence and incidence of MDD was undertaken. Electronic databases Medline, PsycINFO and EMBASE were searched. Community-representative studies adhering to suitable diagnostic nomenclature were included. A meta-regression was conducted to explore sources of heterogeneity in prevalence and guide the stratification of data in a meta-analysis. Results The literature search identified 116 prevalence and four incidence studies. Prevalence period, sex, year of study, depression subtype, survey instrument, age and region were significant determinants of prevalence, explaining 57.7% of the variability between studies. The global point prevalence of MDD, adjusting for methodological differences, was 4.7% (4.4–5.0%). The pooled annual incidence was 3.0% (2.4–3.8%), clearly at odds with the pooled prevalence estimates and the previously reported average duration of 30 weeks for an episode of MDD. Conclusions Our findings provide a comprehensive and up-to-date profile of the prevalence of MDD globally. Region and study methodology influenced the prevalence of MDD. This needs to be considered in the GBD 2010 study and in investigations into the ecological determinants of MDD. Good-quality estimates from low-/middle-income countries were sparse. More accurate data on incidence are also required.
Abstract:
We describe an investigation into how Massey University’s Pollen Classifynder can accelerate the understanding of pollen and its role in nature. The Classifynder is an imaging microscopy system that can locate, image and classify slide-based pollen samples. Given the laboriousness of purely manual image acquisition and identification, it is vital to exploit assistive technologies like the Classifynder to enable acquisition and analysis of pollen samples. It is also vital that we understand the strengths and limitations of automated systems so that they can be used (and improved) to complement the strengths and weaknesses of human analysts to the greatest extent possible. This article reviews some of our experiences with the Classifynder system and our exploration of alternative classifier models to enhance both accuracy and interpretability. Our experiments in the pollen analysis problem domain have been based on samples from the Australian National University’s pollen reference collection (2,890 grains, 15 species) and images bundled with the Classifynder system (400 grains, 4 species). These samples have been represented using the Classifynder image feature set. We additionally work through a real-world case study where we assess the ability of the system to determine the pollen make-up of samples of New Zealand honey. In addition to the Classifynder’s native neural network classifier, we have evaluated linear discriminant, support vector machine, decision tree and random forest classifiers on these data, with encouraging results. Our hope is that our findings will help enhance the performance of future releases of the Classifynder and other systems for accelerating the acquisition and analysis of pollen samples.
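A minimal sketch of the kind of classifier comparison described above, using scikit-learn equivalents of the linear discriminant, support vector machine, decision tree and random forest models; the synthetic features stand in for the Classifynder image feature set, which is not reproduced here.

```python
# Compare several off-the-shelf classifiers on synthetic stand-ins for pollen image features.
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic data standing in for per-grain feature vectors (e.g. 2,890 grains, 15 species)
X, y = make_classification(n_samples=2890, n_features=43, n_informative=20,
                           n_classes=15, n_clusters_per_class=1, random_state=0)

models = {
    "LDA": LinearDiscriminantAnalysis(),
    "SVM (RBF)": SVC(),
    "Decision tree": DecisionTreeClassifier(random_state=0),
    "Random forest": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name:14s} accuracy = {scores.mean():.3f} +/- {scores.std():.3f}")
```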
Abstract:
Busway stations are the interface between passengers and services. The station is crucial to line operation as it is typically the only location where buses can pass each other. Congestion may occur here when buses manoeuvring into and out of the platform lane interfere with bus flow, or when a queue of buses forms upstream of the platform lane, blocking the passing lane. Further, some systems include operation where express buses do not observe the station, resulting in a proportion of non-stopping buses. It is important to understand the operation of the station under this type of operation and its effect on busway capacity. This study uses microscopic simulation to model busway station operation and to analyse the relationship between station potential capacity where all buses stop and mixed potential capacity where there is a mixture of stopping and non-stopping buses. First, the microsimulation technique is used to analyse the All Stopping Buses (ASB) scenario, and a statistical model is then tuned and calibrated for a specified range of controlled dwell time scenarios. Subsequently, a mathematical model is developed for Mixed Stopping Buses (MSB) potential capacity by introducing different proportions of express (or non-stopping) buses. The proposed models for busway station bus capacity provide a better understanding of operation and are useful to transit agencies in busway planning, design and operation.
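The capacity analysis described above relies on a calibrated microsimulation rather than a closed-form expression, and the paper's models are not reproduced in the abstract. The snippet below is only a rough Monte Carlo illustration of how dwell-time variability limits the throughput of a single stopping platform; the distributions, clearance time and the simple treatment of non-stopping buses are assumptions, not the paper's models.

```python
# Rough Monte Carlo illustration: throughput of one busway platform under dwell-time variability.
# All parameter values and the simple bypass assumption are hypothetical, not the paper's model.
import numpy as np

rng = np.random.default_rng(1)
n_buses   = 100_000
clearance = 10.0                                                      # s for one bus to pull out and the next to pull in (assumed)
dwell     = rng.lognormal(mean=np.log(25), sigma=0.4, size=n_buses)   # s at the platform (assumed)

occupancy = dwell + clearance                       # platform occupancy per stopping bus
asb_capacity = 3600.0 / occupancy.mean()            # all-stopping-buses throughput, buses/h

# Naive mixed-operation illustration: a fraction p of buses bypass the platform via the passing lane,
# so the platform occupancy constrains only the stopping share of the flow.
p = 0.3
msb_capacity = asb_capacity / (1.0 - p)
print(f"ASB ~ {asb_capacity:.0f} buses/h; naive MSB upper bound at p={p:.0%}: {msb_capacity:.0f} buses/h")
```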
Abstract:
Traditional sensitivity and elasticity analyses of matrix population models have been used to inform management decisions, but they ignore the economic costs of manipulating vital rates. For example, the growth rate of a population is often most sensitive to changes in adult survival rate, but this does not mean that increasing that rate is the best option for managing the population because it may be much more expensive than other options. To explore how managers should optimize their manipulation of vital rates, we incorporated the cost of changing those rates into matrix population models. We derived analytic expressions for locations in parameter space where managers should shift between management of fecundity and survival, for the balance between fecundity and survival management at those boundaries, and for the allocation of management resources to sustain that optimal balance. For simple matrices, the optimal budget allocation can often be expressed as simple functions of vital rates and the relative costs of changing them. We applied our method to management of the Helmeted Honeyeater (Lichenostomus melanops cassidix; an endangered Australian bird) and the koala (Phascolarctos cinereus) as examples. Our method showed that cost-efficient management of the Helmeted Honeyeater should focus on increasing fecundity via nest protection, whereas optimal koala management should focus on manipulating both fecundity and survival simultaneously. These findings are contrary to the cost-negligent recommendations of elasticity analysis, which would suggest focusing on managing survival in both cases. A further investigation of Helmeted Honeyeater management options, based on an individual-based model incorporating density dependence, spatial structure, and environmental stochasticity, confirmed that fecundity management was the most cost-effective strategy. Our results demonstrate that decisions that ignore economic factors will reduce management efficiency. ©2006 Society for Conservation Biology.
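For readers unfamiliar with the elasticity analysis being extended above, the sketch below computes the population growth rate and the standard sensitivities and elasticities of a small projection matrix (sensitivity s_ij = v_i w_j / <v, w>, elasticity e_ij = (a_ij / λ) s_ij). The matrix entries are made up, and the paper's cost-adjusted optimization is not reproduced.

```python
# Growth rate, sensitivity and elasticity of a small stage-structured projection matrix.
# The matrix is hypothetical; only the standard (cost-free) elasticity analysis is shown.
import numpy as np

A = np.array([[0.0, 1.5, 2.0],     # fecundities
              [0.4, 0.0, 0.0],     # survival/transition rates
              [0.0, 0.6, 0.8]])

eigvals, right = np.linalg.eig(A)
k   = np.argmax(eigvals.real)
lam = eigvals.real[k]                       # asymptotic growth rate (dominant eigenvalue)
w   = np.abs(right[:, k].real)              # stable stage distribution (right eigenvector)

eigvals_T, left = np.linalg.eig(A.T)
kT = np.argmax(eigvals_T.real)
v  = np.abs(left[:, kT].real)               # reproductive values (left eigenvector)

sensitivity = np.outer(v, w) / (v @ w)      # d(lambda) / d(a_ij)
elasticity  = (A / lam) * sensitivity       # proportional sensitivities; they sum to 1
print(f"lambda = {lam:.3f}")
print("elasticities:\n", np.round(elasticity, 3))
```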
Abstract:
Money is often a limiting factor in conservation, and attempting to conserve endangered species can be costly. Consequently, a framework for optimizing fiscally constrained conservation decisions for a single species is needed. In this paper we find the optimal budget allocation among isolated subpopulations of a threatened species to minimize local extinction probability. We solve the problem using stochastic dynamic programming, derive a useful and simple alternative guideline for allocating funds, and test its performance using forward simulation. The model considers subpopulations that persist in habitat patches of differing quality, which in our model is reflected in different relationships between money invested and extinction risk. We discover that, in most cases, subpopulations that are less efficient to manage should receive more money than those that are more efficient to manage, due to higher investment needed to reduce extinction risk. Our simple investment guideline performs almost as well as the exact optimal strategy. We illustrate our approach with a case study of the management of the Sumatran tiger, Panthera tigris sumatrae, in Kerinci Seblat National Park (KSNP), Indonesia. We find that different budgets should be allocated to the separate tiger subpopulations in KSNP. The subpopulation that is not at risk of extinction does not require any management investment. Based on the combination of risks of extinction and habitat quality, the optimal allocation for these particular tiger subpopulations is an unusual case: subpopulations that occur in higher-quality habitat (more efficient to manage) should receive more funds than the remaining subpopulation that is in lower-quality habitat. Because the yearly budget allocated to the KSNP for tiger conservation is small, to guarantee the persistence of all the subpopulations that are currently under threat we need to prioritize those that are easier to save. When allocating resources among subpopulations of a threatened species, the combined effects of differences in habitat quality, cost of action, and current subpopulation probability of extinction need to be integrated. We provide a useful guideline for allocating resources among isolated subpopulations of any threatened species. © 2010 by the Ecological Society of America.
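A minimal grid-search sketch of the underlying allocation problem: split a fixed budget between two isolated subpopulations whose local extinction risk declines with investment, and pick the split that minimizes the expected number of local extinctions. The exponential risk-investment curves and all parameter values are assumptions for illustration; the paper's stochastic dynamic programming solution and its guideline are not reproduced.

```python
# Illustrative budget split between two subpopulations to minimize expected local extinctions.
# Exponential risk-investment curves and all parameters are hypothetical.
import numpy as np

budget = 100.0
p0  = np.array([0.6, 0.4])      # baseline extinction probabilities without management (assumed)
eff = np.array([20.0, 60.0])    # cost scale: higher = less efficient to manage (assumed)

def extinction_prob(x, p_base, cost_scale):
    """Extinction risk after investing x, assumed to decay exponentially with spend."""
    return p_base * np.exp(-x / cost_scale)

x1 = np.linspace(0.0, budget, 1001)            # candidate spend on subpopulation 1
x2 = budget - x1
expected_extinctions = (extinction_prob(x1, p0[0], eff[0])
                        + extinction_prob(x2, p0[1], eff[1]))
best = np.argmin(expected_extinctions)
print(f"optimal split: {x1[best]:.1f} vs {x2[best]:.1f}, "
      f"expected local extinctions = {expected_extinctions[best]:.3f}")
```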
Abstract:
The notion of being sure that you have completely eradicated an invasive species is fanciful because of imperfect detection and persistent seed banks. Eradication is commonly declared either on an ad hoc basis, on notions of seed bank longevity, or on setting arbitrary thresholds of 1% or 5% confidence that the species is not present. Rather than declaring eradication at some arbitrary level of confidence, we take an economic approach in which we stop looking when the expected costs outweigh the expected benefits. We develop theory that determines the number of years of absent surveys required to minimize the net expected cost. Given detection of a species is imperfect, the optimal stopping time is a trade-off between the cost of continued surveying and the cost of escape and damage if eradication is declared too soon. A simple rule of thumb compares well to the exact optimal solution using stochastic dynamic programming. Application of the approach to the eradication programme of Helenium amarum reveals that the actual stopping time was a precautionary one given the ranges for each parameter. © 2006 Blackwell Publishing Ltd/CNRS.
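The trade-off described above can be sketched numerically: after each additional year of surveys with no detections, Bayes' rule updates the probability the species is still present, and the expected cost of stopping (escape and damage) is weighed against the cost of surveying for another year. The detection probability, costs and prior below are hypothetical; this is not the paper's parameterization or its rule of thumb.

```python
# Expected net cost of declaring eradication after d consecutive absent surveys (illustrative).
# Prior, detection probability and costs are hypothetical.
import numpy as np

prior_present = 0.5     # probability the species is still present when surveys begin (assumed)
p_detect      = 0.6     # probability one annual survey detects the species if present (assumed)
survey_cost   = 10.0    # cost of one year of surveys (assumed units)
damage_cost   = 1000.0  # expected cost of escape/damage if eradication is declared too soon (assumed)

years = np.arange(0, 21)
# P(present | d absent surveys) via Bayes' rule with imperfect detection
p_present = (prior_present * (1 - p_detect) ** years) / (
    prior_present * (1 - p_detect) ** years + (1 - prior_present))

expected_cost = years * survey_cost + p_present * damage_cost
d_star = int(years[np.argmin(expected_cost)])
print(f"stop surveying after {d_star} absent surveys "
      f"(expected cost {expected_cost.min():.1f}, residual P(present) = {p_present[d_star]:.3f})")
```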
Abstract:
Threatened species often exist in a small number of isolated subpopulations. Given limitations on conservation spending, managers must choose from strategies that range from managing just one subpopulation and risking all other subpopulations to managing all subpopulations equally and poorly, thereby risking the loss of all subpopulations. We took an economic approach to this problem in an effort to discover a simple rule of thumb for optimally allocating conservation effort among subpopulations. This rule was derived by maximizing the expected number of extant subpopulations remaining given n subpopulations are actually managed. We also derived a spatiotemporally optimized strategy through stochastic dynamic programming. The rule of thumb suggested that more subpopulations should be managed if the budget increases or if the cost of reducing local extinction probabilities decreases. The rule performed well against the exact optimal strategy that was the result of the stochastic dynamic program and much better than other simple strategies (e.g., always manage one extant subpopulation or half of the remaining subpopulations). We applied our approach to the allocation of funds in 2 contrasting case studies: reduction of poaching of Sumatran tigers (Panthera tigris sumatrae) and habitat acquisition for San Joaquin kit foxes (Vulpes macrotis mutica). For our estimated annual budget for Sumatran tiger management, the mean time to extinction was about 32 years. For our estimated annual management budget for kit foxes in the San Joaquin Valley, the mean time to extinction was approximately 24 years. Our framework allows managers to deal with the important question of how to allocate scarce conservation resources among subpopulations of any threatened species. © 2008 Society for Conservation Biology.
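The rule of thumb above answers "how many subpopulations should share the budget?". The toy sketch below enumerates n, splits the budget evenly among the n managed subpopulations, and maximizes the expected number that persist; the persistence curve and all parameters are assumptions for illustration, not the derivation in the paper.

```python
# Toy version of the allocation question: manage n of N subpopulations with an even budget split
# and maximize the expected number persisting. The persistence curve and numbers are hypothetical.
import numpy as np

N = 6                 # extant subpopulations
budget = 120.0        # total annual budget (assumed units)
p_unmanaged = 0.35    # persistence probability with no management (assumed)

def persistence(spend, p_max=0.95, half_sat=30.0):
    """Assumed saturating relationship between spend and persistence probability."""
    return p_unmanaged + (p_max - p_unmanaged) * spend / (spend + half_sat)

for n in range(1, N + 1):
    spend_each = budget / n
    expected_persisting = n * persistence(spend_each) + (N - n) * p_unmanaged
    print(f"manage {n}: spend {spend_each:5.1f} each, "
          f"expected persisting subpopulations = {expected_persisting:.2f}")
```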
Abstract:
Statistical comparison of oil samples is an integral part of oil spill identification, which deals with the process of linking an oil spill with its source of origin. In current practice, a frequentist hypothesis test is often used to evaluate evidence in support of a match between a spill and a source sample. As frequentist tests are only able to evaluate evidence against a hypothesis but not in support of it, we argue that this leads to unsound statistical reasoning. Moreover, currently only verbal conclusions on a very coarse scale can be made about the match between two samples, whereas a finer quantitative assessment would often be preferred. To address these issues, we propose a Bayesian predictive approach for evaluating the similarity between the chemical compositions of two oil samples. We derive the underlying statistical model from some basic assumptions on modeling assays in analytical chemistry, and to further facilitate and improve numerical evaluations, we develop analytical expressions for the key elements of Bayesian inference for this model. The approach is illustrated with both simulated and real data and is shown to have appealing properties in comparison with both standard frequentist and Bayesian approaches.
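One way to make the Bayesian predictive idea concrete: under a normal model with a conjugate normal-inverse-gamma prior for a single chemical compound, the posterior predictive distribution given the source sample is a Student-t, and the spill measurements can be scored against it. The sketch below is a generic conjugate-normal illustration with simulated data, not the specific model or the analytical expressions derived in the paper.

```python
# Score spill measurements under the posterior predictive (Student-t) implied by the source sample.
# Generic conjugate normal-inverse-gamma model with simulated data; not the paper's exact model.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
source = rng.normal(loc=5.0, scale=0.3, size=20)   # hypothetical compound ratios in the source oil
spill  = rng.normal(loc=5.1, scale=0.3, size=8)    # hypothetical compound ratios in the spill sample

# Weak normal-inverse-gamma prior: mean mu0 with pseudo-count kappa0, variance with (alpha0, beta0)
mu0, kappa0, alpha0, beta0 = 0.0, 1e-3, 0.5, 0.5

n, xbar, ss = len(source), source.mean(), ((source - source.mean()) ** 2).sum()
kappa_n = kappa0 + n
mu_n    = (kappa0 * mu0 + n * xbar) / kappa_n
alpha_n = alpha0 + n / 2
beta_n  = beta0 + 0.5 * ss + 0.5 * kappa0 * n * (xbar - mu0) ** 2 / kappa_n

# Posterior predictive for a new observation is Student-t
df    = 2 * alpha_n
scale = np.sqrt(beta_n * (1 + 1 / kappa_n) / alpha_n)
log_pred = stats.t.logpdf(spill, df=df, loc=mu_n, scale=scale).sum()
print(f"log posterior-predictive score of spill given source: {log_pred:.2f}")
```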
Abstract:
Reconstructing 3D motion data is highly under-constrained due to several common sources of data loss during measurement, such as projection, occlusion, or miscorrespondence. We present a statistical model of 3D motion data, based on the Kronecker structure of the spatiotemporal covariance of natural motion, as a prior on 3D motion. This prior is expressed as a matrix normal distribution, composed of separable and compact row and column covariances. We relate the marginals of the distribution to the shape, trajectory, and shape-trajectory models of prior art. When the marginal shape distribution is not available from training data, we show how placing a hierarchical prior over shapes results in a convex MAP solution in terms of the trace-norm. The matrix normal distribution, fit to a single sequence, outperforms state-of-the-art methods at reconstructing 3D motion data in the presence of significant data loss, while providing covariance estimates of the imputed points.
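For reference, the matrix normal density underlying the prior described above factors the covariance of vec(X) into a Kronecker product of a row (temporal) and a column (spatial) covariance. A minimal log-density implementation is sketched below with arbitrary example matrices; it is not the authors' fitting or imputation code.

```python
# Log-density of a matrix normal distribution X ~ MN(M, U, V), where U is the n x n row covariance
# and V is the p x p column covariance (so cov(vec(X)) = V kron U). Example matrices are arbitrary.
import numpy as np

def matrix_normal_logpdf(X, M, U, V):
    n, p = X.shape
    D = X - M
    _, logdet_u = np.linalg.slogdet(U)
    _, logdet_v = np.linalg.slogdet(V)
    quad = np.trace(np.linalg.solve(V, D.T) @ np.linalg.solve(U, D))   # tr(V^-1 D^T U^-1 D)
    return -0.5 * (n * p * np.log(2 * np.pi) + p * logdet_u + n * logdet_v + quad)

rng = np.random.default_rng(3)
n, p = 6, 4
A, B = rng.normal(size=(n, n)), rng.normal(size=(p, p))
U, V = A @ A.T + n * np.eye(n), B @ B.T + p * np.eye(p)   # arbitrary positive-definite covariances
X, M = rng.normal(size=(n, p)), np.zeros((n, p))
print(matrix_normal_logpdf(X, M, U, V))
```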
Abstract:
Objective: The aim of this study was to explore whether there is a relationship between the degree of MR-defined inflammation, using ultrasmall superparamagnetic iron oxide (USPIO) particles, and biomechanical stress, using finite element analysis (FEA) techniques, in carotid atheromatous plaques. Methods and Results: 18 patients with angiographically proven carotid stenoses underwent multi-sequence MR imaging before and 36 h after USPIO infusion. T2*-weighted images were manually segmented into quadrants, and the signal change in each quadrant, normalised to adjacent muscle, was calculated after USPIO administration. Plaque geometry was obtained from the rest of the multi-sequence dataset and used within an FEA model to predict the maximal stress concentration within each slice. Subsequently, a new statistical model was developed to explicitly investigate the form of the relationship between biomechanical stress and signal change. The Spearman's rank correlation coefficient for USPIO-enhanced signal change and maximal biomechanical stress was -0.60 (p = 0.009). Conclusions: There is an association between biomechanical stress and USPIO-enhanced MR-defined inflammation within carotid atheroma, both known risk factors for plaque vulnerability. This underlines the complex interaction between physiological processes and biomechanical mechanisms in the development of carotid atheroma. However, these are preliminary data that will need validation in a larger cohort of patients.
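The association reported above is a simple rank correlation; for completeness, the sketch below shows how a Spearman coefficient and p-value would be computed for per-quadrant signal change versus maximal stress. The paired values are placeholders, not the study's measurements.

```python
# Spearman rank correlation between USPIO-enhanced signal change and maximal biomechanical stress.
# The paired values below are placeholders, not the study's data.
import numpy as np
from scipy import stats

signal_change = np.array([-0.12, -0.05, 0.02, -0.20, -0.08, 0.01, -0.15, -0.03])    # normalised signal change (placeholder)
max_stress    = np.array([310.0, 180.0, 120.0, 420.0, 250.0, 140.0, 360.0, 160.0])  # kPa (placeholder)

rho, p_value = stats.spearmanr(signal_change, max_stress)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```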
Abstract:
Submarine groundwater discharge (SGD) is an integral part of the hydrological cycle and represents an important aspect of land-ocean interactions. We used a numerical model to simulate flow and salt transport in a nearshore groundwater aquifer under varying wave conditions, based on yearlong random wave data sets including storm surge events. The results showed significant flow asymmetry, with rapid response of influxes and retarded response of effluxes across the seabed to the irregular wave conditions. While a storm surge immediately intensified seawater influx to the aquifer, the subsequent return of intruded seawater to the sea, as part of an increased SGD, was gradual. Using functional data analysis, we revealed and quantified retarded, cumulative effects of past wave conditions on SGD, including the fresh groundwater and recirculating seawater discharge components. The retardation was characterized well by a gamma distribution function regardless of wave conditions. The relationships between discharge rates and wave parameters were quantifiable by a regression model in a functional form independent of the actual irregular wave conditions. This statistical model provides a useful method for analyzing and predicting SGD from nearshore unconfined aquifers affected by random waves.
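A minimal sketch of the retarded, cumulative response described above: past wave forcing is convolved with a gamma-shaped lag kernel to produce a discharge response. The forcing series, kernel parameters and linear-response assumption are all illustrative, not the paper's fitted functional regression model.

```python
# Convolve a synthetic wave-forcing series with a gamma-shaped lag kernel (illustrative only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
days = np.arange(365)
forcing = 1.0 + 0.5 * rng.standard_normal(365)     # synthetic daily wave forcing (e.g. wave height)
forcing[200:205] += 3.0                            # a storm-surge-like event

# Gamma lag kernel: how strongly conditions tau days in the past still influence today's discharge
tau = np.arange(0, 60)
kernel = stats.gamma.pdf(tau, a=2.5, scale=6.0)    # shape/scale assumed for illustration
kernel /= kernel.sum()                             # normalise so the response is a weighted average

response = np.convolve(forcing, kernel)[:365]      # retarded, cumulative SGD response (arbitrary units)
lag_of_peak = int(response[200:260].argmax())
print(f"discharge response peaks about {lag_of_peak} days after the storm onset")
```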