997 results for GLAUCOMA PROBABILITY SCORE


Relevance: 20.00%

Abstract:

Anticipating the number and identity of bidders has a significant influence on many theoretical results of an auction and on bidders' bidding behaviour: a bidder who knows in advance which specific competitors are likely to participate has a head start when setting the bid price. Despite these competitive implications, most previous studies have focused almost entirely on forecasting the number of bidders, and only a few authors have dealt with the identity dimension, and then only qualitatively. Using a case study with immediate real-life applications, this paper develops a method for estimating every potential bidder's probability of participating in a future auction as a function of the tender's economic size, removing the bias caused by the distribution of contract-size opportunities. In this way, a bidder or auctioneer can estimate the likelihood that a specific group of key, previously identified bidders will take part in a future tender.
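As a concrete illustration, the sketch below models one bidder's probability of participating in a future tender as a logistic function of (log) tender size; the logistic model, the log transform used as a crude stand-in for the paper's bias correction, and all figures are assumptions for demonstration, not the authors' actual procedure.

```python
# Hypothetical sketch: estimating one bidder's probability of participating in a
# future tender as a function of tender economic size, from past participation data.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Past tenders: economic size and whether bidder "A" submitted a bid (illustrative data).
tender_size = np.array([0.2e6, 0.5e6, 1.1e6, 2.4e6, 3.0e6, 5.5e6, 8.0e6, 12.0e6])
bidder_a_bid = np.array([0, 0, 1, 1, 0, 1, 1, 1])

# Work on the log of tender size so the few largest contracts do not dominate
# (a crude stand-in for correcting the bias of the contract-size distribution).
X = np.log(tender_size).reshape(-1, 1)
model = LogisticRegression().fit(X, bidder_a_bid)

# Probability that bidder "A" participates in a future tender of 4 million.
p = model.predict_proba(np.log([[4.0e6]]))[0, 1]
print(f"P(bidder A participates | size = 4.0e6) = {p:.2f}")
```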

Relevance: 20.00%

Abstract:

Glaucoma is the second leading cause of blindness worldwide. It is a group of optic neuropathies characterized by progressive optic nerve degeneration, excavation of the optic disc due to apoptosis of retinal ganglion cells, and corresponding visual field defects. Open angle glaucoma (OAG) is a subtype of glaucoma, classified according to the age of onset into juvenile and adult-onset forms with a cut-off at 40 years of age. The prevalence of OAG is 1-2% of the population over 40 years and increases with age. During the last decade several candidate loci and three candidate genes for OAG have been identified: myocilin (MYOC), optineurin (OPTN) and WD40-repeat 36 (WDR36). Exfoliation syndrome (XFS), age, elevated intraocular pressure and genetic predisposition are known risk factors for OAG. XFS is characterized by accumulation of grayish scales of fibrillogranular extracellular material in the anterior segment of the eye and is overall the most common identifiable cause of glaucoma (exfoliation glaucoma, XFG). In the past year, three single nucleotide polymorphisms (SNPs) in the lysyl oxidase like 1 (LOXL1) gene have been associated with XFS and XFG in several populations. This thesis describes the first molecular genetic studies of OAG and XFS/XFG in the Finnish population. The roles of the MYOC and OPTN genes and fourteen candidate loci were investigated in eight Finnish glaucoma families. Both candidate genes and all candidate loci were excluded in these families, further confirming the heterogeneous nature of OAG. To investigate the genetic basis of glaucoma in a large Finnish family with juvenile and adult onset OAG, we analysed the MYOC gene in family members. A glaucoma-associated mutation (Thr377Met) in MYOC was identified and segregated with the disease in the family. This finding has great significance for the family and encourages investigation of the MYOC gene in other Finnish OAG families as well. To identify genetic susceptibility loci for XFS, we carried out a genome-wide scan in an extended Finnish XFS family. The scan produced a promising candidate locus on chromosomal region 18q12.1-21.33 and several additional putative susceptibility loci for XFS. The locus on chromosome 18 provides a solid starting point for the fine-scale mapping studies needed to identify variants conferring susceptibility to XFS in the region. A case-control and family-based association study and a family-based linkage study were performed to evaluate whether SNPs in the LOXL1 gene confer risk of XFS, XFG or POAG in Finnish patients. A significant association between LOXL1 SNPs and both XFS and XFG was confirmed in the Finnish population, whereas no association was detected with POAG. Other genetic and environmental factors are probably also involved in the pathogenesis of XFS and XFG.
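As a generic illustration of the case-control association testing mentioned above, the sketch below runs a chi-square test on a 2x2 table of allele counts; the counts are invented for demonstration and are not the Finnish LOXL1 data.

```python
# Illustrative case-control allelic association test (invented counts, not study data).
from scipy.stats import chi2_contingency

#        risk allele, other allele
table = [[180, 60],    # XFS/XFG cases
         [120, 120]]   # controls

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p_value:.3g}")
```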

Relevance: 20.00%

Abstract:

Accurate determination of same-sex twin zygosity is important for medical, scientific and personal reasons. Determination may be based upon questionnaire data, blood groups, enzyme isoforms and fetal membrane examination, but assignment of zygosity must ultimately be confirmed by genotypic data. Here, methods are reviewed for calculating the average probability of correctly concluding that a twin pair is monozygotic, given that they share the same genotypes across all loci of commonly utilized multiplex short tandem repeat (STR) kits.
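As a toy version of the calculation being reviewed, the sketch below applies Bayes' rule to obtain the probability of monozygosity given a full genotype match; the prior and the per-locus dizygotic match probabilities are assumed values, not figures from any particular STR kit.

```python
# Minimal sketch of the Bayesian zygosity calculation: probability that a same-sex
# twin pair is monozygotic (MZ) given identical genotypes at every STR locus.
import numpy as np

prior_mz = 0.5                       # assumed prior probability of MZ among same-sex twins
p_match_dz = np.array([0.30, 0.25, 0.35, 0.28, 0.22,
                       0.31, 0.27, 0.33, 0.24, 0.29])  # assumed P(DZ pair matches) per locus

likelihood_mz = 1.0                  # MZ twins always share genotypes
likelihood_dz = np.prod(p_match_dz)  # DZ twins match at all loci only by chance

posterior_mz = (prior_mz * likelihood_mz /
                (prior_mz * likelihood_mz + (1 - prior_mz) * likelihood_dz))
print(f"P(MZ | full genotype match) = {posterior_mz:.6f}")
```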

Relevance: 20.00%

Abstract:

Hip height, body condition, subcutaneous fat, eye muscle area, percentage Bos taurus, fetal age and diet digestibility data were collected at 17 372 assessments on 2181 Brahman and tropical composite (average 28% Brahman) female cattle aged between 0.5 and 7.5 years at five sites across Queensland. The study validated the subtraction of previously published estimates of gravid uterine weight to correct liveweight to non-pregnant status. Hip height and liveweight were linearly related (Brahman: P < 0.001, R² = 58%; tropical composite: P < 0.001, R² = 67%). Liveweight varied by 12-14% per body condition score (5-point scale) as cows differed from moderate condition (P < 0.01). Parallel effects were also found for subcutaneous rump fat depth and eye muscle area, which were highly correlated with each other and with body condition score (r = 0.7-0.8). Liveweight differed from average by 1.65-1.66% per mm of rump fat depth and 0.71-0.76% per cm² of eye muscle area (P < 0.01). Estimated dry matter digestibility of the pasture consumed had no consistent effect in predicting liveweight and was therefore excluded from the final models. A method developed to estimate the full liveweight of post-weaning-age female beef cattle from the other measures predicted liveweight to within 10% and 23% of that recorded for 65% and 95% of cases, respectively. A 95% chance of the predicted group-average liveweight (using body condition score) being within 5, 4, 3, 2 and 1% of the actual group-average liveweight required 23, 36, 62, 137 and 521 females, respectively, provided the precision and accuracy of measurement matched that achieved in the research. Non-pregnant Bos taurus female cattle were calculated to be 10-40% heavier than Brahmans at the same hip height and body condition, indicating a substantial conformational difference. The liveweight prediction method was applied to a validation population of 83 unrelated groups of cattle weighed in extensive commercial situations on 119 days over 18 months (20 917 assessments). Predicted liveweight in the validation population exceeded the average recorded liveweight of weigh groups by an average of 19 kg (~6%), demonstrating the difficulty of achieving accurate and precise animal measurements under extensive commercial grazing conditions.
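To show how such a prediction equation might be used in practice, the sketch below applies the percentage adjustments quoted in the abstract to a hypothetical hip-height baseline; the baseline weight, height slope and reference values are assumptions, not the paper's fitted coefficients.

```python
# Illustrative liveweight predictor (assumed coefficients, not the paper's equations):
# a hip-height baseline adjusted by body condition score and rump fat depth.
def predict_liveweight(hip_height_cm, condition_score, rump_fat_mm,
                       base_weight_kg=400.0, base_height_cm=135.0, kg_per_cm=4.0):
    # Linear hip-height term (hypothetical slope and reference height).
    weight = base_weight_kg + kg_per_cm * (hip_height_cm - base_height_cm)
    # ~13% change in liveweight per body condition score away from moderate (score 3).
    weight *= 1.0 + 0.13 * (condition_score - 3)
    # ~1.65% change per mm of rump fat away from an assumed reference of 5 mm.
    weight *= 1.0 + 0.0165 * (rump_fat_mm - 5)
    return weight

print(f"Predicted liveweight: {predict_liveweight(140, 4, 8):.0f} kg")
```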

Relevance: 20.00%

Abstract:

Sampling design is critical to the quality of quantitative research, yet it does not always receive appropriate attention in nursing research. The current article details how balancing probability techniques with practical considerations produced a representative sample of Australian nursing homes (NHs). Budgetary, logistical, and statistical constraints were managed by excluding some NHs (e.g., those too difficult to access) from the sampling frame; a stratified random sampling methodology yielded a final sample of 53 NHs from a population of 2,774. In testing the adequacy of representation of the study population, chi-square tests for goodness of fit generated nonsignificant results for distribution by distance from a major city and by type of organization. A significant result for state/territory was expected and was easily corrected by the application of weights. The current article provides recommendations for drawing high-quality probability-based samples and stresses the importance of testing the representativeness of achieved samples.
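The representativeness check described above can be sketched as a chi-square goodness-of-fit test comparing the achieved sample with the population distribution across strata; the counts below are invented and the strata are hypothetical.

```python
# Sketch of a representativeness check: chi-square goodness of fit of the achieved
# sample against the population distribution across strata (illustrative counts).
import numpy as np
from scipy.stats import chisquare

population_by_stratum = np.array([900, 750, 500, 350, 274])   # hypothetical NH counts
sample_by_stratum     = np.array([ 18,  14,  10,   6,   5])   # hypothetical sample of 53

# Expected sample counts proportional to the population distribution.
expected = population_by_stratum / population_by_stratum.sum() * sample_by_stratum.sum()
stat, p = chisquare(sample_by_stratum, f_exp=expected)
print(f"chi-square = {stat:.2f}, p = {p:.3f}")   # a nonsignificant p suggests adequate representation
```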

Relevance: 20.00%

Abstract:

Whether a statistician wants to complement a probability model for observed data with a prior distribution and carry out fully probabilistic inference, or to base the inference on the likelihood function alone, may be a fundamental question in theory, but in practice it may well be of less importance if the likelihood contains much more information than the prior. Maximum likelihood inference can be justified as a Gaussian approximation at the posterior mode under flat priors. However, in situations where the parametric assumptions of standard statistical models would be too rigid, a more flexible model formulation, combined with fully probabilistic inference, can be achieved using hierarchical Bayesian parametrization. This work includes five articles, all of which apply probability modeling to problems involving incomplete observation. Three of the papers apply maximum likelihood estimation and two apply hierarchical Bayesian modeling. Because maximum likelihood can be presented as a special case of Bayesian inference, but not the other way round, the introductory part of this work presents a framework for probability-based inference using only Bayesian concepts. Some results from the original articles are re-derived using the toolbox developed here, to show that they are also justifiable under this more general framework. The assumption of exchangeability and de Finetti's representation theorem are applied repeatedly to justify the use of standard parametric probability models with conditionally independent likelihood contributions, and it is argued that the same reasoning applies under sampling from a finite population. The main emphasis is on probability-based inference under incomplete observation due to study design, illustrated using a generic two-phase cohort sampling design as an example. The alternative approaches presented for the analysis of such a design are full likelihood, which utilizes all observed information, and conditional likelihood, which is restricted to a completely observed set and conditions on the rule that generated that set. Conditional likelihood inference is also applied to a joint analysis of prevalence and incidence data, a situation subject to both left censoring and left truncation. Other topics covered are model uncertainty and causal inference using posterior predictive distributions. A non-parametric monotonic regression model for one or more covariates and a Bayesian estimation procedure are formulated, and the model is applied in the context of optimal sequential treatment regimes, demonstrating that inference based on posterior predictive distributions is feasible in this case as well.
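A toy illustration of the relationship noted above (maximum likelihood as a special case of Bayesian inference) is sketched below for a binomial proportion: with a flat prior, the posterior mode coincides with the maximum likelihood estimate. The counts are arbitrary.

```python
# Toy example: with a flat Beta(1, 1) prior, the posterior mode of a binomial
# proportion equals the maximum likelihood estimate.
from scipy.stats import beta

successes, trials = 7, 20

mle = successes / trials                                   # maximum likelihood estimate
posterior = beta(successes + 1, trials - successes + 1)    # posterior under a flat prior
posterior_mode = successes / trials                        # mode of Beta(a, b) is (a-1)/(a+b-2)

print(f"MLE = {mle:.3f}, posterior mode = {posterior_mode:.3f}")
print(f"95% credible interval = {posterior.interval(0.95)}")
```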

Relevance: 20.00%

Abstract:

This thesis, which consists of an introduction and four peer-reviewed original publications, studies the problems of haplotype inference (haplotyping) and local alignment significance. Both problems belong to the broad area of bioinformatics and computational biology. The presented solutions are computationally fast and accurate, which makes them practical in high-throughput sequence data analysis. Haplotype inference is a computational problem in which the goal is to estimate haplotypes from a sample of genotypes as accurately as possible. The problem is important because the direct measurement of haplotypes is difficult, whereas genotypes are easier to quantify; haplotypes are key players when studying, for example, the genetic causes of diseases. In this thesis, three methods are presented for the haplotype inference problem: HaploParser, HIT, and BACH. HaploParser is based on a combinatorial mosaic model and hierarchical parsing that together mimic recombinations and point mutations in a biologically plausible way. In this mosaic model, the current population is assumed to have evolved from a small founder population, so the haplotypes of the current population are recombinations of the (implicit) founder haplotypes with some point mutations. HIT (Haplotype Inference Technique) uses a hidden Markov model for haplotypes, and efficient algorithms are presented to learn this model from genotype data. The model structure of HIT is analogous to the mosaic model of HaploParser with founder haplotypes, and it can therefore be seen as a probabilistic model of recombinations and point mutations. BACH (Bayesian Context-based Haplotyping) utilizes a context tree weighting algorithm to efficiently sum over all variable-length Markov chains when evaluating the posterior probability of a haplotype configuration, and algorithms are presented that find haplotype configurations with high posterior probability. BACH is the most accurate method presented in this thesis and has performance comparable to the best available software for haplotype inference. Local alignment significance is a computational problem in which one asks whether the local similarities between two sequences arise because the sequences are related or merely by chance. Similarity of sequences is measured by their best local alignment score, from which a p-value is computed: the probability of picking two sequences from the null model that have an equally good or better best local alignment score. Local alignment significance is used routinely, for example, in homology searches. In this thesis, a general framework is sketched that allows one to compute a tight upper bound for the p-value of a local pairwise alignment score. Unlike previous methods, the presented framework is not affected by so-called edge effects and can handle gaps (deletions and insertions) without troublesome sampling and curve fitting.
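The founder-mosaic idea behind HaploParser and HIT can be sketched as a small hidden Markov model in which state switches stand in for recombination and emission errors for point mutations; the founder haplotypes, rates, and observed haplotype below are invented, and this is a simplification of the published models.

```python
# Minimal founder-mosaic HMM sketch: the forward algorithm sums over founder paths,
# with state switches mimicking recombination and emission errors mimicking mutation.
import numpy as np

founders = np.array([[0, 1, 1, 0, 1],
                     [1, 1, 0, 0, 0]])     # 2 assumed founder haplotypes, 5 markers
observed = np.array([0, 1, 0, 0, 0])       # observed haplotype
recomb, mut = 0.1, 0.01                    # assumed switch and mutation probabilities

K, L = founders.shape
emit = np.where(founders == observed, 1 - mut, mut)   # P(observed allele | founder), per marker

# Forward recursion: alpha[k] = P(observations so far, current founder = k).
alpha = np.full(K, 1.0 / K) * emit[:, 0]
for j in range(1, L):
    stay = (1 - recomb) * alpha                 # remain on the same founder
    switch = recomb * alpha.sum() / K           # recombine onto a uniformly chosen founder
    alpha = (stay + switch) * emit[:, j]

print(f"P(observed haplotype | mosaic model) = {alpha.sum():.3e}")
```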

Relevance: 20.00%

Abstract:

Hydrologic impacts of climate change are usually assessed by downscaling the General Circulation Model (GCM) output of large-scale climate variables to local-scale hydrologic variables. Such an assessment is characterized by uncertainty arising from the ensembles of projections generated with multiple GCMs, known as intermodel or GCM uncertainty. Ensemble averaging, with weights assigned to GCMs on the basis of model evaluation, is one method of addressing this uncertainty and is used in the present study for regional-scale impact assessment. GCM outputs of large-scale climate variables are downscaled to subdivisional-scale monsoon rainfall. Weights are assigned to the GCMs on the basis of model performance and model convergence, which are evaluated with the Cumulative Distribution Functions (CDFs) generated from the downscaled GCM output (for both the 20th Century [20C3M] and future scenarios) and from observed data. The ensemble averaging approach, with weights assigned to GCMs, is itself affected by the uncertainty of partial ignorance, which stems from the unavailability of the outputs of some GCMs for a few scenarios (in the Intergovernmental Panel on Climate Change [IPCC] data distribution center for Assessment Report 4 [AR4]). This uncertainty is modeled with imprecise probability, i.e., the probability is represented as an interval gray number. Furthermore, the CDF generated with one GCM differs entirely from that generated with another, so the use of multiple GCMs results in a band of CDFs. Representing this band with a single-valued weighted-mean CDF may be misleading; such a band can only be represented by an envelope that contains all the CDFs generated with the available GCMs. The imprecise CDF represents such an envelope, which not only contains the CDFs generated with all the available GCMs but also, to an extent, accounts for the uncertainty resulting from the missing GCM output. This concept of imprecise probability is also validated in the present study. The imprecise CDFs of monsoon rainfall are derived for three 30-year time slices, the 2020s, 2050s and 2080s, under the A1B, A2 and B1 scenarios. The model is demonstrated with the prediction of monsoon rainfall in the Orissa meteorological subdivision, which shows a possible decreasing trend in the future.
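The band of CDFs and its envelope can be sketched directly: the code below builds empirical CDFs of rainfall from a few models and takes their pointwise minimum and maximum as the lower and upper bounds of an imprecise CDF. The rainfall samples are synthetic, not downscaled GCM output.

```python
# Sketch of an imprecise CDF: the envelope of empirical CDFs from several models
# (synthetic rainfall samples stand in for downscaled GCM output).
import numpy as np

rng = np.random.default_rng(0)
gcm_rainfall = [rng.normal(900, 80, 200),   # hypothetical GCM 1 (monsoon rainfall, mm)
                rng.normal(870, 90, 200),   # hypothetical GCM 2
                rng.normal(930, 70, 200)]   # hypothetical GCM 3

grid = np.linspace(600, 1200, 61)
cdfs = np.array([[np.mean(sample <= x) for x in grid] for sample in gcm_rainfall])

lower_envelope = cdfs.min(axis=0)   # lower bound of the imprecise CDF
upper_envelope = cdfs.max(axis=0)   # upper bound of the imprecise CDF
print(f"P(rainfall <= 900 mm) lies in [{lower_envelope[30]:.2f}, {upper_envelope[30]:.2f}]")
```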

Relevance: 20.00%

Abstract:

We derive a very general expression for the survival probability and the first passage time distribution of a particle executing Brownian motion in full phase space with an absorbing boundary condition at a point in position space, valid irrespective of the statistical nature of the dynamics. The expression, together with Jensen's inequality, naturally leads to a lower bound on the actual survival probability and to an approximate first passage time distribution. These are expressed in terms of the position-position, velocity-velocity, and position-velocity variances, so knowledge of these variances enables one to compute a lower bound on the survival probability and consequently the first passage distribution function. As examples, we compute these for a Gaussian Markovian process and, in the non-Markovian case, for an exponentially decaying friction kernel and for a power-law friction kernel. Our analysis shows that the survival probability decays exponentially at long times, irrespective of the nature of the dynamics, with an exponent equal to the transition state rate constant.
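The analytic bound itself is not reproduced here, but the quantity it bounds can be illustrated numerically: the sketch below estimates the survival probability of an overdamped Brownian particle with an absorbing boundary by Monte Carlo simulation, with all parameters chosen arbitrarily.

```python
# Monte Carlo illustration of a survival probability with an absorbing boundary
# (free overdamped Brownian motion; arbitrary parameters, not the paper's model).
import numpy as np

rng = np.random.default_rng(1)
n_paths, n_steps, dt, D = 20000, 2000, 1e-3, 1.0
x0, boundary = 1.0, 0.0

x = np.full(n_paths, x0)
alive = np.ones(n_paths, dtype=bool)
survival = np.empty(n_steps)
for t in range(n_steps):
    x[alive] += np.sqrt(2 * D * dt) * rng.standard_normal(alive.sum())
    alive &= x > boundary              # absorb any path that has crossed the boundary
    survival[t] = alive.mean()         # fraction of paths still surviving

first_passage = -np.diff(survival)     # first passage time distribution (per time step)
print(f"S(t = {n_steps * dt:.1f}) ≈ {survival[-1]:.3f}")
```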

Relevance: 20.00%

Abstract:

The need for special education (SE) is increasing. The majority of those whose problems are due to neurodevelopmental disorders have no specific aetiology. The aim of this study was to evaluate the contribution of prenatal and perinatal factors, and of factors associated with growth and development, to a later need for full-time SE, and to assess joint structural and volumetric brain alterations among subjects with an unexplained, familial need for SE. A random sample of 900 subjects in full-time SE, allocated to three levels of neurodevelopmental problems, and 301 controls in mainstream education (ME) provided data on socioeconomic factors, pregnancy, delivery, growth, and development. Of those, 119 subjects belonging to a sibling pair in full-time SE with unexplained aetiology and 43 controls in ME underwent brain magnetic resonance imaging (MRI). Analyses of structural brain alterations and midsagittal area and diameter measurements were made, and voxel-based morphometry (VBM) provided detailed information on regional grey matter, white matter, and cerebrospinal fluid (CSF) volume differences. Father's age ≥ 40 years, low birth weight, male sex, and lower socio-economic status all increased the probability of SE placement. At age 1 year, a one standard deviation score decrease in height raised the probability of SE placement by 40%, and a corresponding decrease in head circumference raised it by 28%. In infancy, gross motor milestones differentiated the children; from age 18 months, fine motor milestones and those related to speech and social skills became more important. Brain MRI revealed no specific aetiology for subjects in SE; however, they more often had ≥ 3 abnormal findings on MRI (thin corpus callosum and enlarged cerebral and cerebellar CSF spaces). In VBM, subjects in full-time SE had smaller global white matter, CSF, and total brain volumes than controls. Compared with controls, subjects with intellectual disabilities had regional volume alterations: greater grey matter volumes bilaterally in the anterior cingulate cortex, smaller grey matter volumes in the left thalamus and left cerebellar hemisphere, greater white matter volume in the left fronto-parietal region, and smaller white matter volumes bilaterally in the posterior limbs of the internal capsules. In conclusion, the epidemiological studies identified several factors that increase the probability of SE placement and that are useful as a framework for interventional studies. The global and regional brain MRI findings provide an interesting basis for future investigations of learning-related brain structures in young subjects with cognitive impairments or intellectual disabilities of unexplained, familial aetiology.
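The epidemiological part of the study reports factors that raise the probability of SE placement; a generic way to model such probabilities is logistic regression, sketched below on simulated data. This is not the study's model, and all data and effect sizes are invented.

```python
# Generic logistic-regression sketch for the probability of SE placement as a
# function of risk factors (simulated data; not the study's analysis).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 500
father_age_over_40 = rng.integers(0, 2, n)
low_birth_weight   = rng.integers(0, 2, n)
male_sex           = rng.integers(0, 2, n)

# Simulate placement with higher odds for each risk factor (assumed effect sizes).
logit = -2.0 + 0.5 * father_age_over_40 + 0.7 * low_birth_weight + 0.4 * male_sex
se_placement = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([father_age_over_40, low_birth_weight, male_sex])
model = LogisticRegression().fit(X, se_placement)
print("estimated odds ratios:", np.round(np.exp(model.coef_[0]), 2))
```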

Relevance: 20.00%

Abstract:

BACKGROUND: Polygenic risk scores comprising established susceptibility variants have been shown to be informative classifiers for several complex diseases, including prostate cancer. For prostate cancer it is unknown whether the inclusion of genetic markers that have so far not been associated with prostate cancer risk at a genome-wide significant level improves disease prediction. METHODS: We built polygenic risk scores in a large training set comprising over 25,000 individuals. Initially, 65 established prostate cancer susceptibility variants were selected. After LD pruning, additional variants were prioritized based on their association with prostate cancer. Six-fold cross-validation was performed to assess genetic risk scores and to optimize the number of additional variants to be included. The final model was evaluated in an independent study population including 1,370 cases and 1,239 controls. RESULTS: The polygenic risk score with 65 established susceptibility variants provided an area under the curve (AUC) of 0.67. Adding 68 novel variants significantly increased the AUC to 0.68 (P = 0.0012) and the net reclassification index by 0.21 (P = 8.5E-08). All novel variants were located in genomic regions already established as associated with prostate cancer risk. CONCLUSIONS: Inclusion of additional genetic variants from established prostate cancer susceptibility regions improves disease prediction. Prostate 75:1467–1474, 2015. © 2015 Wiley Periodicals, Inc.
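As a generic sketch of the approach (not the study's pipeline or its actual variants), the code below builds a polygenic risk score as a weighted sum of risk-allele counts on simulated genotypes and evaluates it with the area under the ROC curve.

```python
# Generic polygenic risk score sketch: weighted sum of risk-allele counts,
# evaluated by AUC on simulated genotypes and case/control labels.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n_individuals, n_variants = 2000, 65
weights = rng.normal(0.1, 0.05, n_variants)                    # assumed per-variant log odds ratios
genotypes = rng.integers(0, 3, (n_individuals, n_variants))    # 0/1/2 risk-allele counts

prs = genotypes @ weights
# Simulated case/control status loosely driven by the score.
status = rng.random(n_individuals) < 1 / (1 + np.exp(-(prs - prs.mean())))

print(f"AUC = {roc_auc_score(status, prs):.3f}")
```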