927 results for overlap probability
Abstract:
Previous studies have enabled exact prediction of probabilities of identity-by-descent (IBD) in random-mating populations for a few loci (up to four or so), with extension to more using approximate regression methods. Here we present a precise predictor of multiple-locus IBD using simple formulas based on exact results for two loci. In particular, the probability of non-IBD, X_{ABC}, at each of ordered loci A, B, and C can be well approximated by X_{ABC} = X_{AB} X_{BC} / X_B, which generalizes to X_{12...k} = X_{12} X_{23} ... X_{k-1,k} / X^{k-2}, where X is the probability of non-IBD at each single locus. Predictions from this chain rule are very precise with population bottlenecks and migration, but are rather poorer in the presence of mutation. From these coefficients, the probabilities of multilocus IBD and non-IBD can also be computed for genomic regions as functions of population size, time, and map distances. An approximate but simple recurrence formula is also developed; it is generally less accurate than the chain rule but more robust with mutation. Used together with the chain rule, it leads to explicit equations for non-IBD in a region. The results can be applied to the detection of quantitative trait loci (QTL) by computing the probability of IBD at candidate loci in terms of identity-by-state at neighboring markers.
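A minimal sketch of the chain-rule approximation stated above; the pairwise and single-locus non-IBD values below are hypothetical placeholders, not numbers from the paper:

```python
def chain_rule_non_ibd(pairwise, single):
    """Chain-rule approximation for multilocus non-IBD:
    X_{1..k} = (X_12 * X_23 * ... * X_{k-1,k}) / X**(k-2),
    where `pairwise` holds the k-1 adjacent two-locus non-IBD
    probabilities and `single` is the one-locus non-IBD probability X."""
    prod = 1.0
    for x in pairwise:
        prod *= x
    k = len(pairwise) + 1
    return prod / single ** (k - 2)

# Three ordered loci A, B, C: X_ABC = X_AB * X_BC / X_B
x_ab, x_bc, x_b = 0.90, 0.92, 0.95   # placeholder values
print(chain_rule_non_ibd([x_ab, x_bc], x_b))  # ~0.8716
```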
Abstract:
A novel multiple regression method (RM) is developed to predict identity-by-descent probabilities at a locus L (IBD_L) among individuals without pedigree, given information on surrounding markers and population history. These IBD_L probabilities are a function of the increase in linkage disequilibrium (LD) generated by drift in a homogeneous population over generations. Three parameters are sufficient to describe population history: effective population size (Ne), number of generations since foundation (T), and marker allele frequencies among founders (p). The IBD_L probabilities are used in a simulation study to map a quantitative trait locus (QTL) via variance component estimation. RM is compared to a coalescent method (CM) in terms of power and robustness of QTL detection. Differences between RM and CM are small but significant. For example, RM is more powerful than CM in dioecious populations, but not in monoecious populations. Moreover, RM is more robust than CM when marker phases are unknown, when there is complete LD among founders, or when Ne is wrong, and less robust when p is wrong. CM utilises all marker haplotype information, whereas RM utilises the information contained in each individual marker and in all possible marker pairs, but not in higher-order interactions. RM consists of a family of models encompassing four different population structures and two ways of using marker information, which contrasts with the single model that must cater for all possible evolutionary scenarios in CM.
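The abstract does not reproduce the RM equations; as an illustration of the kind of drift-generated LD relationship such methods build on, Sved's (1971) classical approximation links expected squared LD between two loci to effective size and recombination fraction c: E[r^2] ≈ 1/(1 + 4·Ne·c). How the finite number of generations T and the founder frequencies p enter the actual RM predictions is not detailed in the abstract.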
Abstract:
A whole-genome scan was conducted to map quantitative trait loci (QTL) for BSE resistance or susceptibility. Cows from four half-sib families were included, and 173 microsatellite markers were used to construct a 2835-cM (Kosambi) linkage map covering 29 autosomes and the pseudoautosomal region of the sex chromosome. Interval mapping by linear regression was applied and extended to a multiple-QTL analysis approach that used identified QTL on other chromosomes as cofactors to increase mapping power. In the multiple-QTL analysis, two genome-wide significant QTL (on BTA17 and X/Y_ps) and four genome-wide suggestive QTL (on BTA1, 6, 13, and 19) were revealed. The QTL identified here using linkage analysis do not overlap with regions previously identified using TDT analysis. One factor that may explain the disparity between the results is that a more extensive data set was used in the present study. Furthermore, methodological differences between TDT and linkage analyses may affect the power of these approaches.
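A minimal sketch of one step of a regression interval-mapping scan with cofactors, as described above; the conditional QTL-genotype probabilities (which would come from flanking markers) and all data below are hypothetical placeholders, not the study's method or data:

```python
import numpy as np

def interval_mapping_step(y, qtl_prob, cofactors):
    """Regress phenotype on the conditional probability of carrying a
    given QTL allele at the tested position, with cofactor columns for
    QTL already identified on other chromosomes."""
    X = np.column_stack([np.ones(len(y)), qtl_prob, cofactors])
    beta, rss, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta, rss

# Hypothetical data: six cows, one cofactor column.
y = np.array([1.0, 2.1, 0.8, 2.5, 1.9, 1.2])
qtl_prob = np.array([0.1, 0.9, 0.2, 0.8, 0.7, 0.3])
cofactor = np.array([[0.5], [0.4], [0.6], [0.5], [0.3], [0.7]])
print(interval_mapping_step(y, qtl_prob, cofactor)[0])
```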
Abstract:
A new deterministic method for predicting simultaneous inbreeding coefficients at three and four loci is presented. The method involves calculating the conditional probability of IBD (identical by descent) at one locus given IBD at the other loci, and multiplying this probability by the prior probability of the latter loci being simultaneously IBD. The conditional probability is obtained by applying a novel regression model, and the prior probability from the theory of digenic measures of Weir and Cockerham. The model was validated for a finite monoecious population mating at random, with a constant effective population size, with or without selfing, and also for an infinite population with a constant intermediate proportion of selfing. We assumed discrete generations. Deterministic predictions were very accurate when compared with simulation results, and robust to alternative forms of implementation. These simultaneous inbreeding coefficients were more sensitive to changes in effective population size than in marker spacing. Extensions to predict simultaneous inbreeding coefficients at more than four loci are now possible.
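In symbols (notation ours, not necessarily the paper's), the three-locus case reads F_ABC = P(IBD_C | IBD_A, IBD_B) · F_AB, where F_AB is the prior two-locus probability obtained from Weir and Cockerham's digenic measures and the conditional term comes from the regression model.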
Abstract:
The power of testing for a population-wide association between a biallelic quantitative trait locus and a linked biallelic marker locus is predicted both empirically and deterministically for several tests. The tests were based on the analysis of variance (ANOVA) and on a number of transmission disequilibrium tests (TDT). Deterministic power predictions made use of family information, and were functions of population parameters including linkage disequilibrium, allele frequencies, and recombination rate. Deterministic power predictions were very close to the empirical power from simulations in all scenarios considered in this study. The different TDTs had very similar power, intermediate between one-way and nested ANOVAs. One-way ANOVA was the only test that was not robust against spurious disequilibrium. Our general framework for predicting power deterministically can be used to predict power in other association tests. Deterministic power calculations are a powerful tool for researchers to plan and evaluate experiments and obviate the need for elaborate simulation studies.
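The abstract does not give the power formulas; a generic sketch of how deterministic power for an F test is usually obtained, assuming the population parameters (LD, allele frequencies, recombination rate) have already been collapsed into a noncentrality parameter — the value below is a placeholder:

```python
from scipy.stats import f, ncf

def anova_power(ncp, df1, df2, alpha=0.05):
    """Deterministic power of an F test: probability that a noncentral
    F variate with noncentrality `ncp` exceeds the central critical value."""
    crit = f.ppf(1 - alpha, df1, df2)
    return 1 - ncf.cdf(crit, df1, df2, ncp)

print(anova_power(ncp=10.0, df1=1, df2=198))  # hypothetical inputs
```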
Abstract:
The presence of arsenic in the environment is a hazard. The accumulation of arsenate by a range of cations in the formation of minerals provides a mechanism for the remediation of arsenate contamination. The formation of the crandallite group of minerals provides one such mechanism for arsenate accumulation. Among the crandallite minerals are philipsbornite, arsenocrandallite and arsenogoyazite. Raman spectroscopy complemented with infrared spectroscopy has enabled aspects of the structure of philipsbornite to be studied. The Raman spectrum of philipsbornite displays an intense band at around 840 cm−1 attributed to the overlap of the symmetric and antisymmetric (AsO4)3− stretching modes. Raman bands observed at 325, 336, 347, 357, 376 and 399 cm−1 are assigned to the ν2 (AsO4)3− symmetric bending vibration (E) and to the ν4 bending vibration (F2). The observation of multiple bending modes supports the concept of a reduction in symmetry of the arsenate anion in philipsbornite. Evidence for phosphate in the mineral is also provided. Using an empirical formula, hydrogen bond distances of 2.8648, 2.7864, 2.6896 and 2.6220 Å were calculated for the OH units in philipsbornite.
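The empirical formula is not named in the abstract; in this literature the Libowitzky (1999) correlation between OH stretching wavenumber and O–H···O distance is commonly used, and inverting it gives distances in the reported 2.6–2.9 Å range. A sketch assuming that correlation (an assumption, not the paper's stated method):

```python
import math

def o_o_distance(nu_oh_cm1):
    """Invert the Libowitzky (1999) correlation
    nu(OH) = 3592 - 304e9 * exp(-d/0.1321)   [nu in cm^-1, d in Angstrom]
    to estimate the hydrogen bond distance from an OH stretching band."""
    return -0.1321 * math.log((3592.0 - nu_oh_cm1) / 304e9)

print(round(o_o_distance(3200.0), 4))  # ~2.704 Angstrom for an illustrative band
```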
Abstract:
The player experience is at the core of videogame play. Understanding the facets of player experience presents many research challenges, as the phenomenon sits at the intersection of psychology, design, human-computer interaction, sociology, and physiology. This workshop brings together an interdisciplinary group of researchers to systematically and rigorously analyse all aspects of the player experience. Methods and tools for conceptualising, operationalising and measuring the player experience form the core of this research. Our aim is to take a holistic approach to identifying, adapting and extending theories and models of the player experience, to understand how these theories and models interact, overlap and differ, and to construct a unified vision for future research.
Abstract:
Background: Efficient, effective child product safety (PS) responses require data on hazards, injury severity and injury probability. PS responses in Australia largely rely on reports from manufacturers/retailers, other jurisdictions/regulators, or consumers. The extent to which such reactive responses reflect actual child injury priorities is unknown. Aims/Objectives/Purpose: This research compared PS issues for children identified from PS regulatory data with those identified from health data sources in Queensland, Australia. Methods: PS regulatory documents describing issues affecting children in Queensland in 2008–2009 were compiled and analysed to identify frequent products and hazards. Three health data sources (ED, injury surveillance and hospital data) were analysed in the same way. Results/Outcomes: Projectile toys/squeeze toys were the priority products for PS regulators, as these toys can release small parts presenting choking hazards. However, across all health datasets, falls were the most common mechanism of injury, and several of the products identified were not subject to a PS system response. While some incidents may not require a response, a manual review of injury description text identified child poisonings and burns as common mechanisms of injury in the health data with substantial documentation of product involvement, yet only 10% of PS system responses focused on these two mechanisms combined. Significance/contribution to the field: Regulatory data focused on products that fail compliance checks and have the 'potential' to cause harm, whereas health data identified actual harm, resulting in different prioritisation of products/mechanisms. Work is needed to better integrate health data into PS responses in Australia.
Abstract:
In March 2008, the Australian Government announced its intention to introduce a national Emissions Trading Scheme (ETS), now expected to start in 2015. This impending development provides an ideal setting to investigate the impact an ETS in Australia will have on the market valuation of Australian Securities Exchange (ASX) firms. This is the first empirical study of the pricing effects of the ETS in Australia. Primarily, we hypothesize that firm value will be negatively related to a firm's carbon intensity profile: that is, there will be a greater impact on firm value for high carbon emitters in the period prior to the introduction of the ETS (2007), whether because of unbooked liabilities associated with future compliance and/or abatement costs, or because of reduced future earnings. Using a sample of 58 Australian listed firms (constrained by the current availability of emissions data), which comprises larger, more profitable and less risky listed Australian firms, we first undertake an event study focusing on five distinct information events argued to affect the probability of the proposed ETS being enacted. Here, we find direct evidence that the capital market is indeed pricing the proposed ETS. Second, using a modified version of the Ohlson (1995) valuation model, we undertake a valuation analysis designed not only to complement the event study results but, more importantly, to provide insights into the capital market's assessment of the magnitude of the economic impact of the proposed ETS as reflected in market capitalization. Here, our results show that the market assesses the most carbon-intensive sample firms a market value decrement, relative to other sample firms, of between 7% and 10% of market capitalization. Further, based on the carbon emission profiles of the sample firms, we infer a 'future carbon permit price' of between AUD$17 and AUD$26 per tonne of carbon dioxide emitted. This estimate is more precise than those in industry reports, which set a carbon price of between AUD$15 and AUD$74 per tonne.
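For reference, the unmodified Ohlson (1995) model expresses market value as P_t = b_t + α1·x_t^a + α2·ν_t, where b_t is the book value of equity, x_t^a abnormal earnings, and ν_t 'other information'; the study's modification presumably lets carbon-intensity information enter through this last component, though the abstract does not spell out the specification.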
Abstract:
Context: Anti-Müllerian hormone (AMH) concentration reflects ovarian aging and is argued to be a useful predictor of age at menopause (AMP). It is hypothesized that AMH falling below a critical threshold corresponds to follicle depletion, which results in menopause. With this threshold, theoretical predictions of AMP can be made. Comparisons of such predictions with observed AMP from population studies support the role of AMH as a forecaster of menopause. Objective: The objective of the study was to investigate whether previous relationships between AMH and AMP are valid using a much larger data set. Setting: AMH was measured in 27 563 women attending fertility clinics. Study Design: From these data a model of age-related AMH change was constructed using a robust regression analysis. Data on AMP from subfertile women were obtained from the population-based Prospect-European Prospective Investigation into Cancer and Nutrition (Prospect-EPIC) cohort (n = 2249). By constructing a probability distribution of the age at which AMH falls below a critical threshold and fitting it to the Prospect-EPIC menopausal age data using maximum likelihood, such a threshold was estimated. Main Outcome: The main outcome was conformity between observed and predicted AMP. Results: To obtain a distribution of AMH-predicted AMP that fit the Prospect-EPIC data, we found the critical AMH threshold should vary among women, such that women with low age-specific AMH have lower thresholds, whereas women with high age-specific AMH have higher thresholds (mean 0.075 ng/mL; interquartile range 0.038–0.15 ng/mL). Such a varying AMH threshold for menopause is a novel and biologically plausible finding. AMH became undetectable (<0.2 ng/mL) approximately 5 years before the occurrence of menopause, in line with a previous report. Conclusions: The conformity of the observed and predicted distributions of AMP supports the hypothesis that declining population averages of AMH are associated with menopause, making AMH an excellent candidate biomarker for AMP prediction. Further research will help establish the accuracy of AMH levels for predicting AMP within individuals.
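A minimal sketch of the threshold idea, assuming (purely for illustration) an exponential age-related AMH decline; neither the functional form nor the rate below is from the study:

```python
import math

def predicted_amp(amh_now, age_now, threshold, decline_rate=0.18):
    """If AMH declines roughly exponentially with age,
    AMH(t) = amh_now * exp(-decline_rate * (t - age_now)),
    the predicted age at menopause (AMP) is where AMH crosses `threshold`.
    Form and rate are illustrative assumptions, not the fitted model."""
    return age_now + math.log(amh_now / threshold) / decline_rate

print(predicted_amp(amh_now=1.0, age_now=35.0, threshold=0.075))  # ~49.4 years
```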
Abstract:
Modelling how a word is activated in human memory is an important requirement for determining the probability of recall of a word in an extra-list cueing experiment. Previous research assumed a quantum-like model in which the semantic network was modelled as entangled qubits; however, the level of activation was clearly overestimated. This paper explores three variations of this model, each of which is distinguished by a scaling factor designed to compensate for the overestimation.
Abstract:
Background: There is a well-developed literature investigating the relationship between various driving behaviours and road crash involvement. However, this research has predominantly been conducted in developed economies dominated by western cultural environments. To date no research has been published that empirically investigates this relationship within the context of emerging economies such as Oman. Objective: The present study aims to investigate driving behaviour as indexed by the Driving Behaviour Questionnaire (DBQ) among a group of Omani university students and staff. Methods: A convenience, non-probability, self-selection sampling approach was utilized with Omani university students and staff. Results: A total of 1003 Omani students (n = 632) and staff (n = 371) participated in the survey. Factor analysis of the DBQ revealed four main factors: errors, speeding violations, lapses and aggressive violations. In the multivariate backward logistic regression analysis, the following factors were identified as significant predictors of being involved in causing at least one crash: driving experience, history of offences, and two DBQ components, namely errors and aggressive violations. Conclusion: This study indicates that errors, aggressive violations of traffic regulations, and a history of traffic offences are major risk factors for road traffic crashes in the sample. While previous international research has demonstrated that speeding is a primary cause of crashes, in the current context the results indicate that an array of factors is associated with crashes. Further research using more rigorous methodology is warranted to inform the development of road safety countermeasures in Oman that improve the overall traffic safety culture.
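A minimal sketch of fitting such a logistic model (omitting the backward-elimination step); the predictor values are hypothetical placeholders, and scikit-learn's `LogisticRegression` applies regularization by default, unlike a conventional epidemiological analysis:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: errors score, aggressive-violations score, years of driving
# experience, number of prior offences (all hypothetical values).
X = np.array([
    [1.2, 0.8, 5, 0], [2.5, 2.0, 2, 3], [1.0, 1.1, 8, 0],
    [2.8, 2.4, 1, 2], [3.0, 2.6, 2, 4], [0.9, 0.7, 10, 0],
    [2.2, 1.0, 4, 1], [1.5, 2.2, 3, 2],
])
y = np.array([0, 1, 0, 1, 1, 0, 1, 0])  # involved in causing >=1 crash

clf = LogisticRegression().fit(X, y)
print(clf.coef_, clf.intercept_)
```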
Abstract:
OBJECTIVE: There has been a dramatic increase in vitamin D testing in Australia in recent years, prompting calls for targeted testing. We sought to develop a model to identify people most at risk of vitamin D deficiency. DESIGN AND PARTICIPANTS: This is a cross-sectional study of 644 60- to 84-year-old participants, 95% of whom were Caucasian, who took part in a pilot randomized controlled trial of vitamin D supplementation. MEASUREMENTS: Baseline 25(OH)D was measured using the Diasorin Liaison platform. Vitamin D insufficiency and deficiency were defined using 50 and 25 nmol/l as cut-points, respectively. A questionnaire was used to obtain information on demographic characteristics and lifestyle factors. We used multivariate logistic regression to predict low vitamin D and calculated the net benefit of using the model compared with 'test-all' and 'test-none' strategies. RESULTS: The mean serum 25(OH)D was 42 (SD 14) nmol/l. Seventy-five per cent of participants were vitamin D insufficient and 10% deficient. Serum 25(OH)D was positively correlated with time outdoors, physical activity, vitamin D intake and ambient UVR, and inversely correlated with age, BMI and poor self-reported health status. These predictors explained approximately 21% of the variance in serum 25(OH)D. The area under the ROC curve for predicting vitamin D deficiency was 0·82. Net benefit for the prediction model was higher than that for the 'test-all' strategy at all probability thresholds, and higher than that for the 'test-none' strategy for threshold probabilities up to 60%. CONCLUSION: Our model could predict vitamin D deficiency with reasonable accuracy, but it needs to be validated in other populations before being implemented.
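A minimal sketch of the net-benefit comparison, using the standard decision-curve formula (Vickers and Elkin); the outcomes and predicted probabilities are hypothetical:

```python
import numpy as np

def net_benefit(y_true, p_hat, threshold):
    """Net benefit of testing everyone whose predicted probability
    exceeds `threshold`: NB = TP/n - FP/n * threshold/(1-threshold)."""
    act = p_hat >= threshold
    n = len(y_true)
    tp = np.sum(act & (y_true == 1))
    fp = np.sum(act & (y_true == 0))
    return tp / n - fp / n * threshold / (1 - threshold)

# 'Test-all' treats everyone (act = all); 'test-none' has NB = 0 everywhere.
y = np.array([1, 0, 0, 1, 0, 1, 0, 0])          # hypothetical deficiency status
p = np.array([.9, .2, .4, .7, .1, .8, .3, .5])  # hypothetical model probabilities
print(net_benefit(y, p, 0.3))
```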
Abstract:
There are no population studies of the prevalence or incidence of child maltreatment in Australia. Child protection data gives some understanding but is restricted by system capacity and by definitional issues across jurisdictions. Child protection data currently suggests that the number of reports is increasing yearly; the child protection system then becomes focused on investigating all reports, diluting the resources available for the children most in need of intervention. A public health response across multiple agencies enables responses to child safety across the entire population. All families are targeted at the primary level; examples include ensuring all parents know the dangers of shaking a baby, or teaching children to say no if a situation makes them uncomfortable. The secondary level of prevention targets families with a number of risk factors, for example subsidised child care so children aren't left unsupervised after school when both parents have to be at work, or home visiting for drug-addicted parents to ensure children are cared for. The tertiary response then becomes the responsibility of the child protection system and is reserved for those children where abuse and neglect are identified. This model requires that child safety is seen in a broader context than just the child protection system, and increasingly health professionals are being identified as an important component of the public health framework. If all injury is viewed as preventable and considered along a continuum from 'accidental' through to 'inflicted', it becomes possible to conceptualise child maltreatment in an injury context. Parental intent may not be to cause harm to the child, but through lack of insight or concern about risk the potential for injury is high. The mechanisms of unintentional and intentional injury overlap, and some suggest that by segregating child abuse (with the possible exception of sexual abuse) from unintentional injury, child abuse is excluded from the broader injury prevention initiative that is gaining momentum in the community.

This research uses a public health perspective, specifically that of injury prevention, to consider the problem of child abuse. The study employed a mixed-method design incorporating secondary data analysis, data linkage and structured interviews with different professional groups. Datasets from the Queensland Injury Surveillance Unit (QISU) and the Department of Child Safety (DCS) were evaluated. Coded injury data was grouped by intent of injury: records with a code indicating the ED presentation was due to child abuse, records with a code indicating the injury was possibly due to abuse, and records whose intent code indicated the injury was unintentional and not due to abuse. Primary data collection from ED records was undertaken and information recoded to assess reliability and completeness. Emergency department data (QISU) was linked to Department of Child Safety data to examine concordance and data quality. Factors influencing the collection and collation of these data were identified through structured interviews and analysed using qualitative methods.

Secondary analysis of QISU data indicated that records lacking specific information on the injury event were more likely to carry an intent code indicating abuse than records with specific information on the event. Codes for abuse appeared in only 1.2% of the 84,765 records analysed. Unintentional injury was the most commonly coded intent (95.3%). In the group assigned a definite abuse code at triage, 83% linked to a DCS record, and cases whose documentation indicated police involvement were significantly more likely to be associated with a DCS record than those without such documentation. Of those coded with an unintentional injury code, 22% linked to a DCS record; cases assigned an urgent triage category were more likely to link than those assigned a resuscitation triage category, and children presenting to regional or remote hospitals were more likely to link to a DCS record than those presenting to urban hospitals. Twenty-nine per cent of cases with a code indicating possible abuse linked to a DCS record. Documentation indicating police involvement, a code for unspecified activity (compared with a code indicating involvement in a sporting activity), and age under 12 months (compared with the 13-17-year age group) were all significantly associated with linkage to a DCS record. Only 13% of records contained documentation indicating that child abuse and neglect were considered in the diagnosis of the injury, despite almost half of the sample having a code of abuse or possible abuse. Doctors and nurses were confident in their knowledge of the process of reporting child maltreatment, but less confident about identifying child abuse and neglect and about what should be reported. Many were concerned about the implications of reporting, for the child and family and for themselves. A number were concerned about the implications of not reporting, mostly for the wellbeing of the child and, for a few, in terms of their legal obligations as mandatory reporters. The outcomes of this research will help improve knowledge of the barriers to effective surveillance of child abuse in emergency departments. This will, in turn, support better identification and reporting practices, more reliable official statistical collections, and the flagging of high-risk cases to ensure adequate departmental responses are initiated.
Abstract:
X-ray microtomography (micro-CT) with micron resolution enables new ways of characterizing microstructures and opens pathways for forward calculations of multiscale rock properties. A quantitative characterization of the microstructure is the first step in this challenge. We developed a new approach to extract scale-dependent characteristics of porosity, percolation, and anisotropic permeability from 3-D microstructural models of rocks. The Hoshen-Kopelman algorithm of percolation theory is employed for a standard percolation analysis. The anisotropy of permeability is calculated by means of the star volume distribution approach. The local porosity distribution and local percolation probability are obtained using local porosity theory. Additionally, the local anisotropy distribution is defined and analyzed through two empirical probability density functions, the isotropy index and the elongation index. For such a high-resolution data set, the typical CT image sizes are on the order of gigabytes to tens of gigabytes, so an extremely large number of calculations is required. To resolve this large-memory problem, parallelization with OpenMP was used to harness the shared-memory infrastructure of cache-coherent Non-Uniform Memory Access machines such as the iVEC SGI Altix 3700Bx2 supercomputer. We regard adequate visualization of the results as an important element of this first, pioneering study.
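A minimal sketch of the percolation check on a 3-D binary pore image; scipy's connected-component labelling stands in for the paper's Hoshen-Kopelman implementation, and the synthetic random volume is a placeholder for real micro-CT data:

```python
import numpy as np
from scipy import ndimage

# Synthetic 3-D binary image: 1 = pore, 0 = grain (placeholder data).
rng = np.random.default_rng(0)
pores = rng.random((50, 50, 50)) < 0.35   # porosity ~0.35

# Label connected pore clusters (6-connectivity by default), then test
# whether any cluster touches both the z=0 and z=max faces, i.e. whether
# the pore space percolates in the z direction.
labels, n_clusters = ndimage.label(pores)
spanning = np.intersect1d(labels[:, :, 0], labels[:, :, -1])
spanning = spanning[spanning != 0]
print("porosity:", pores.mean(), "percolates in z:", spanning.size > 0)
```

The same labelling can be repeated on subvolumes of increasing size to estimate the local porosity distribution and local percolation probability mentioned above.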