515 results for Bayesian Model Averaging
Bayesian networks as a complex system tool in the context of a major industry and university project
Abstract:
The Source Monitoring Framework is a promising model of constructive memory, yet it fails because it is connectionist and does not allow content tagging. The Dual-Process Signal Detection Model is an improvement because it reduces mnemic qualia to a single memory signal (or degree of belief), but it still commits itself to non-discrete representation. If 'tagging' is taken to mean the assignment of propositional attitudes to aggregates of mnemic characteristics informed inductively, then a discrete model becomes plausible. A Bayesian model of source monitoring accounts for the continuous variation of inputs and the assignment of prior probabilities to memory content. A modified version of the High-Threshold Dual-Process model is recommended to further source monitoring research.
Abstract:
What type of probability theory best describes the way humans make judgments under uncertainty and decisions under conflict? Although rational models of cognition have become prominent and have achieved much success, they adhere to the laws of classical probability theory despite the fact that human reasoning does not always conform to these laws. For this reason we have seen the recent emergence of models based on an alternative probabilistic framework drawn from quantum theory. These quantum models show promise in addressing cognitive phenomena that have proven recalcitrant to modeling by means of classical probability theory. This review compares and contrasts probabilistic models based on Bayesian or classical versus quantum principles, and highlights the advantages and disadvantages of each approach.
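The classical-versus-quantum contrast the review draws can be made concrete with a toy calculation. The sketch below is not drawn from the review itself; the belief state and question angles are invented for illustration. It shows that probabilities computed with non-commuting projectors depend on the order in which questions are asked, something no single classical joint distribution reproduces:

```python
# Hedged sketch: question-order effects under quantum probability.
# The state psi and the projector angles are illustrative assumptions.

import numpy as np

def projector(angle):
    """Rank-1 projector onto a direction in a 2-D real Hilbert space."""
    v = np.array([np.cos(angle), np.sin(angle)])
    return np.outer(v, v)

psi = np.array([1.0, 0.0])                 # initial belief state
A, B = projector(np.pi / 6), projector(np.pi / 3)  # two "yes" answers

p_ab = np.linalg.norm(B @ A @ psi) ** 2    # "yes" to A, then "yes" to B
p_ba = np.linalg.norm(A @ B @ psi) ** 2    # "yes" to B, then "yes" to A
print(p_ab, p_ba)                          # unequal: order matters
```

Under classical (Bayesian) probability the conjunction would be commutative, so the two printed values would have to coincide.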
Abstract:
Over the past several years, evidence has accumulated showing that the cerebellum plays a significant role in cognitive function. Here we show, in a large genetically informative twin sample (n = 430; aged 16-30 years), that the cerebellum is strongly, and reliably (n = 30 rescans), activated during an n-back working memory task, particularly lobules I-IV, VIIa Crus I and II, IX, and the vermis. Monozygotic twin correlations for cerebellar activation were generally much larger than dizygotic twin correlations, consistent with genetic influences. Structural equation models showed that up to 65% of the variance in cerebellar activation during working memory is genetic (averaging 34% across significant voxels), most prominently in lobules VI and VIIa Crus I, with the remaining variance explained by unique/unshared environmental factors. Heritability estimates for brain activation in the cerebellum agree with those found for working memory activation in the cerebral cortex, even though cerebellar cyto-architecture differs substantially. Phenotypic correlations between BOLD percent signal change in cerebrum and cerebellum were low, and bivariate modeling indicated that genetic influences on the cerebellum are at least partly specific to the cerebellum. Activation at the voxel level correlated very weakly with cerebellar gray matter volume, suggesting specific genetic influences on the BOLD signal. The heritable signals identified here should facilitate the discovery of genetic polymorphisms influencing cerebellar function through genome-wide association studies, to elucidate the genetic liability to brain disorders affecting the cerebellum.
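To illustrate how monozygotic and dizygotic twin correlations translate into variance components, here is a minimal sketch using Falconer's closed-form decomposition; the study itself fitted full structural equation (ACE) models, and the correlation values below are hypothetical, not taken from the paper:

```python
# Hedged sketch: Falconer's formulas, a simplified alternative to full
# structural equation modeling. Input correlations are illustrative.

def falconer_ace(r_mz: float, r_dz: float) -> dict:
    """Decompose variance into additive genetic (A), shared environment (C),
    and unique environment (E) components from twin correlations."""
    a2 = 2.0 * (r_mz - r_dz)   # MZ twins share ~100% of genes, DZ ~50%
    c2 = 2.0 * r_dz - r_mz     # shared environment inflates both correlations
    e2 = 1.0 - r_mz            # whatever MZ twins do not share must be unique
    return {"A": a2, "C": c2, "E": e2}

# Hypothetical voxel-level twin correlations:
print(falconer_ace(r_mz=0.60, r_dz=0.30))  # A=0.6, C=0.0, E=0.4 (up to rounding)
```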
Abstract:
Despite substantial progress in measuring the 3D profile of anatomical variations in the human brain, their genetic and environmental causes remain enigmatic. We developed an automated system to identify and map genetic and environmental effects on brain structure in large brain MRI databases. We applied our multi-template segmentation approach ("Multi-Atlas Fluid Image Alignment") to fluidly propagate hand-labeled parameterized surface meshes into 116 scans of twins (60 identical, 56 fraternal), labeling the lateral ventricles. Mesh surfaces were averaged within subjects to minimize segmentation error. We fitted quantitative genetic models at each of 30,000 surface points to measure the proportion of shape variance attributable to (1) genetic differences among subjects, (2) environmental influences unique to each individual, and (3) shared environmental effects. Surface-based statistical maps revealed 3D heritability patterns, and their significance, with and without adjustments for global brain scale. These maps visualized detailed profiles of environmental versus genetic influences on the brain, extending genetic models to spatially detailed, automatically computed, 3D maps.
Abstract:
This work describes the development of a model of cerebral atrophic changes associated with the progression of Alzheimer's disease (AD). Linear registration, region-of-interest analysis, and voxel-based morphometry methods have all been employed to elucidate the changes observed at discrete intervals during a disease process. In addition to describing the nature of the changes, modeling disease-related changes via deformations can also provide information on their temporal characteristics. To model changes associated with AD continuously, deformation maps from 21 patients were averaged across a novel z-score disease progression dimension based on Mini Mental State Examination (MMSE) scores. The resulting deformation maps are presented via three metrics: local volume loss (atrophy), CSF volume increase, and translation (interpreted as representing collapse of cortical structures). Inspection of the maps revealed significant perturbations in the deformation fields corresponding to the entorhinal cortex (EC) and hippocampus, orbitofrontal and parietal cortex, and regions surrounding the sulci and ventricular spaces, with earlier changes predominantly lateralized to the left hemisphere. These changes are consistent with results from post-mortem studies of AD.
Abstract:
The hemodynamic response function (HRF) describes the local response of brain vasculature to functional activation. Accurate HRF modeling enables the investigation of cerebral blood flow regulation and improves our ability to interpret fMRI results. Block designs have been used extensively as fMRI paradigms because they maximize detection power; however, block designs are not optimal for HRF parameter estimation. Here we assessed the utility of block design fMRI data for HRF modeling. The trueness (relative deviation), precision (relative uncertainty), and identifiability (goodness-of-fit) of different HRF models were examined, and test-retest reproducibility of HRF parameter estimates was assessed, using computer simulations and fMRI data from 82 healthy young adult twins acquired on two occasions 3 to 4 months apart. The effects of systematically varying attributes of the block design paradigm were also examined. In our comparison of five HRF models, the model comprising the sum of two gamma functions with six free parameters had the greatest parameter accuracy and identifiability. Hemodynamic response function height and time to peak were highly reproducible between studies, and width was moderately reproducible, but the reproducibility of onset time was low. This study established the feasibility and test-retest reliability of estimating HRF parameters using data from block design fMRI studies.
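The "sum of two gamma functions" family the study favors is commonly parameterized as in the sketch below. This is a hedged illustration: the abstract does not give the exact parameterization used, so the functional form and default values here are the widely used canonical-style ones, not necessarily the study's:

```python
# Hedged sketch: a double-gamma HRF with six free parameters (two shapes,
# two scales, an undershoot ratio, and an amplitude). Defaults are
# illustrative, canonical-style values, not the paper's estimates.

import numpy as np
from scipy.stats import gamma

def double_gamma_hrf(t, a1=6.0, a2=16.0, b1=1.0, b2=1.0, c=1/6, amp=1.0):
    """HRF as a positive gamma response minus a delayed gamma undershoot."""
    peak = gamma.pdf(t, a1, scale=b1)        # main hemodynamic response
    undershoot = gamma.pdf(t, a2, scale=b2)  # post-stimulus undershoot
    return amp * (peak - c * undershoot)

t = np.arange(0, 32, 0.1)                    # seconds after stimulus onset
hrf = double_gamma_hrf(t)
print(f"time to peak ~ {t[hrf.argmax()]:.1f} s")  # ~5 s with these defaults
```

Fitting such a model to block-design data amounts to estimating these six parameters, from which the height, width, time to peak, and onset time discussed in the abstract can be derived.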
Abstract:
Population-based brain mapping provides great insight into the trajectory of aging and dementia, as well as brain changes that normally occur over the human life span. We describe three novel brain mapping techniques, cortical thickness mapping, tensor-based morphometry (TBM), and hippocampal surface modeling, which offer enormous power for measuring disease progression in drug trials, and shed light on the neuroscience of brain degeneration in Alzheimer's disease (AD) and mild cognitive impairment (MCI). We report the first time-lapse maps of cortical atrophy spreading dynamically in the living brain, based on averaging data from populations of subjects with Alzheimer's disease and normal subjects imaged longitudinally with MRI. These dynamic sequences show a rapidly advancing wave of cortical atrophy sweeping from limbic and temporal cortices into higher-order association and ultimately primary sensorimotor areas, in a pattern that correlates with cognitive decline. A complementary technique, TBM, reveals the 3D profile of atrophic rates, at each point in the brain. A third technique, hippocampal surface modeling, plots the profile of shape alterations across the hippocampal surface. The three techniques provide moderate to highly automated analyses of images, have been validated on hundreds of scans, and are sensitive to clinically relevant changes in individual patients and groups undergoing different drug treatments. We compare time-lapse maps of AD, MCI, and other dementias, correlate these changes with cognition, and relate them to similar time-lapse maps of childhood development, schizophrenia, and HIV-associated brain degeneration. Strengths and weaknesses of these different imaging measures for basic neuroscience and drug trials are discussed.
Abstract:
Modeling a business process requires broad knowledge on the part of the business analyst. We argue that existing Business Process Management methodologies do not consider business goals at the appropriate level. In this paper we present an approach to integrating business goals and business process models. We design a Business Goal Ontology for modeling business goals. Furthermore, we devise a modeling pattern for linking the goals to process models and show how the ontology can be used in query answering. In this way, we integrate the intentional perspective into our business process ontology framework, enriching the process description and enabling new types of business process analysis.
Abstract:
The total entropy utility function is considered for the dual purpose of Bayesian design for model discrimination and parameter estimation. A sequential design setting is proposed, in which it is shown how to efficiently estimate the total entropy utility for a wide variety of data types. Utility estimation relies on forming particle approximations to a number of intractable integrals, which is afforded by the use of the sequential Monte Carlo algorithm for Bayesian inference. A number of motivating examples are considered to demonstrate the performance of total entropy in comparison to utilities for model discrimination and parameter estimation alone. The results suggest that the total entropy utility selects designs which are efficient under both experimental goals with little compromise in achieving either goal. As such, the total entropy utility is advocated as a general utility for Bayesian design in the presence of model uncertainty.
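The flavor of the intractable integrals involved can be conveyed with a toy nested Monte Carlo estimate of a Kullback-Leibler design utility (the parameter-estimation component of a total-entropy-style criterion). The model, prior, and function names below are illustrative assumptions, not taken from the paper:

```python
# Hedged sketch: particle estimate of an expected KL design utility for
# y | theta ~ N(theta * d, 1) with theta ~ N(0, 1). Toy model, not the
# paper's; "kl_utility" is a hypothetical name.

import numpy as np

rng = np.random.default_rng(0)

def kl_utility(d, n_outer=500, n_inner=2000):
    """Expected KL(posterior || prior), estimated by nested Monte Carlo."""
    thetas = rng.normal(size=n_outer)      # outer draws from the prior
    ys = rng.normal(loc=thetas * d)        # prior predictive data sets
    particles = rng.normal(size=n_inner)   # inner particle approximation
    total = 0.0
    for y in ys:
        ll = -0.5 * (y - particles * d) ** 2   # log-likelihood up to a constant
        w = np.exp(ll)                         # importance weights (prior -> posterior)
        # KL(post || prior) = E_post[log lik] - log p(y); constants cancel
        total += (w * ll).sum() / w.sum() - np.log(w.mean())
    return total / n_outer

for d in (0.5, 1.0, 2.0):
    print(d, kl_utility(d))   # more extreme designs are more informative here
```

In the paper's sequential setting the particle weights come from a sequential Monte Carlo run rather than fresh prior draws, which is what makes repeated utility evaluation affordable.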
Abstract:
In this paper it is demonstrated how the Bayesian parametric bootstrap can be adapted to models with intractable likelihoods. The approach is most appealing when the semi-automatic approximate Bayesian computation (ABC) summary statistics are selected. After a pilot run of ABC, the likelihood-free parametric bootstrap approach requires very few model simulations to produce an approximate posterior, which can be a useful approximation in its own right. An alternative is to use this approximation as a proposal distribution in ABC algorithms to make them more efficient. In this paper, the parametric bootstrap approximation is used to form the initial importance distribution for the sequential Monte Carlo and the ABC importance and rejection sampling algorithms. The new approach is illustrated through a simulation study of the univariate g-and-k quantile distribution, and is used to infer parameter values of a stochastic model describing expanding melanoma cell colonies.
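The g-and-k distribution is defined through its quantile function, so it is easy to simulate from but has no tractable density, which is exactly the likelihood-free setting ABC targets. A minimal sketch of the simulator plus a plain ABC rejection step follows; the parametric-bootstrap proposal from the paper is not shown, and the priors, summaries, and tolerances below are illustrative:

```python
# Hedged sketch: g-and-k simulation and ABC rejection. Octile summaries
# and a flat prior are illustrative choices, not the paper's.

import numpy as np

rng = np.random.default_rng(1)
PROBS = [12.5, 25, 37.5, 50, 62.5, 75, 87.5]  # octiles as summary statistics

def gk_sample(a, b, g, k, n, c=0.8):
    """Draw from g-and-k by pushing standard normals through the quantile
    function ((1 - exp(-gz)) / (1 + exp(-gz)) equals tanh(gz/2))."""
    z = rng.normal(size=n)
    return a + b * (1 + c * np.tanh(g * z / 2)) * (1 + z**2) ** k * z

def abc_rejection(y_obs, n_sims=5000, tol=2.0):
    """Keep prior draws whose simulated summaries are close to the data's."""
    s_obs = np.percentile(y_obs, PROBS)
    kept = []
    for _ in range(n_sims):
        theta = rng.uniform(0, 5, size=4)   # flat prior on (a, b, g, k)
        s_sim = np.percentile(gk_sample(*theta, n=len(y_obs)), PROBS)
        if np.linalg.norm(s_sim - s_obs) < tol:
            kept.append(theta)
    return np.array(kept)

y_obs = gk_sample(3.0, 1.0, 2.0, 0.5, n=1000)   # synthetic "observed" data
post = abc_rejection(y_obs)
print(post.mean(axis=0) if len(post) else "no acceptances; raise tol")
```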
Abstract:
The inverse temperature hyperparameter of the hidden Potts model governs the strength of spatial cohesion and therefore has a substantial influence over the resulting model fit. The difficulty arises from the dependence of an intractable normalising constant on the value of the inverse temperature; thus there is no closed-form solution for sampling from the distribution directly. We review three computational approaches for addressing this issue, namely pseudolikelihood, path sampling, and the approximate exchange algorithm. We compare the accuracy and scalability of these methods using a simulation study.
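Of the three approaches, pseudolikelihood is the simplest to sketch: it replaces the intractable joint likelihood with a product of per-site conditionals, each of which has a cheap normalising constant. The minimal illustration below uses a toy lattice with toroidal boundaries via np.roll; it is not the review's implementation:

```python
# Hedged sketch: maximum pseudolikelihood estimation of the inverse
# temperature beta for a q-state Potts model on a 2D lattice.

import numpy as np
from scipy.optimize import minimize_scalar

def neighbour_counts(z, q):
    """n[i, j, k] = number of the 4 neighbours of site (i, j) with label k."""
    n = np.zeros(z.shape + (q,))
    for shift, axis in [(1, 0), (-1, 0), (1, 1), (-1, 1)]:
        rolled = np.roll(z, shift, axis=axis)   # toroidal boundary
        for k in range(q):
            n[..., k] += (rolled == k)
    return n

def neg_log_pseudolikelihood(beta, z, q):
    n = neighbour_counts(z, q)
    match = np.take_along_axis(n, z[..., None], axis=-1)[..., 0]  # n_i(z_i)
    log_norm = np.log(np.exp(beta * n).sum(axis=-1))  # per-site constant
    return -(beta * match - log_norm).sum()

z = np.random.default_rng(2).integers(0, 3, size=(64, 64))  # toy labels, q=3
res = minimize_scalar(neg_log_pseudolikelihood, bounds=(0, 2),
                      args=(z, 3), method="bounded")
print("pseudolikelihood estimate of beta:", res.x)  # ~0 for random labels
```

Path sampling and the approximate exchange algorithm avoid the conditional-independence approximation made here, at greater computational cost, which is the accuracy-versus-scalability trade-off the review examines.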
Abstract:
To further investigate susceptibility loci identified by genome-wide association studies, we genotyped 5,500 SNPs across 14 associated regions in 8,000 samples from a control group and 3 diseases: type 2 diabetes (T2D), coronary artery disease (CAD) and Graves' disease. We defined, using Bayes theorem, credible sets of SNPs that were 95% likely, based on posterior probability, to contain the causal disease-associated SNPs. In 3 of the 14 regions, TCF7L2 (T2D), CTLA4 (Graves' disease) and CDKN2A-CDKN2B (T2D), much of the posterior probability rested on a single SNP, and, in 4 other regions (CDKN2A-CDKN2B (CAD) and CDKAL1, FTO and HHEX (T2D)), the 95% sets were small, thereby excluding most SNPs as potentially causal. Very few SNPs in our credible sets had annotated functions, illustrating the limitations in understanding the mechanisms underlying susceptibility to common diseases. Our results also show the value of more detailed mapping to target sequences for functional studies.
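The credible-set construction itself is straightforward once per-SNP posterior probabilities are in hand: rank the SNPs in a region by posterior probability and accumulate until 95% of the mass is covered. A minimal sketch follows; how the posteriors are derived (e.g., from approximate Bayes factors) is not shown, and the input values are illustrative:

```python
# Hedged sketch: smallest 95% credible set of SNPs from per-SNP posterior
# probabilities. The probabilities below are made up for illustration.

import numpy as np

def credible_set(posteriors, level=0.95):
    """Smallest set of SNPs whose posterior probabilities sum to >= level."""
    p = np.asarray(posteriors, dtype=float)
    p = p / p.sum()                      # normalise over the region's SNPs
    order = np.argsort(p)[::-1]          # most probable SNPs first
    cum = np.cumsum(p[order])
    n_keep = np.searchsorted(cum, level) + 1
    return order[:n_keep], cum[n_keep - 1]

snps, mass = credible_set([0.70, 0.12, 0.08, 0.05, 0.03, 0.02])
print(snps, mass)   # indices in the credible set and the mass it covers
```

When much of the posterior rests on a single SNP, as in TCF7L2 or CTLA4 above, the set returned by such a procedure shrinks to one or a few variants.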
Abstract:
This paper proposes an analytical Incident Traffic Management framework for freeway incident modeling and traffic re-routing. The proposed framework incorporates an econometric incident duration model and a traffic re-routing optimization module. The incident duration model is used to estimate the expected duration of the incident and thus determine the planning horizon for the re-routing module. The re-routing module is a CTM-based Single Destination System Optimal Dynamic Traffic Assignment model that generates optimal real-time strategies for re-routing freeway traffic to its adjacent arterial network during incidents. The proposed framework has been applied to a case study network comprising a freeway and its adjacent arterial network in South East Queensland, Australia. Results from different scenarios of freeway demand and incident blockage extent are analyzed, demonstrating the advantages of the proposed framework.
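At the core of the re-routing module, the Cell Transmission Model (CTM) propagates traffic by moving the minimum of upstream demand and downstream supply between adjacent cells. A minimal single-link sketch follows; cell parameters are illustrative, and the paper's full model adds the network, the single-destination routing, and the optimization layer:

```python
# Hedged sketch: one update step of a basic Cell Transmission Model.
# Q (max flow per step), N (jam occupancy), and the initial occupancies
# are illustrative values.

import numpy as np

def ctm_step(n, Q, N, v=1.0, w=0.5):
    """Advance cell occupancies n by one time step on a single link."""
    send = np.minimum(v * n[:-1], Q)        # upstream demand
    recv = np.minimum(Q, w * (N - n[1:]))   # downstream supply
    y = np.minimum(send, recv)              # flow between adjacent cells
    n = n.copy()
    n[:-1] -= y                             # vehicles leave upstream cells
    n[1:] += y                              # and enter downstream cells
    return n

n = np.array([10.0, 5.0, 15.0, 2.0])        # vehicles per cell
for _ in range(3):
    n = ctm_step(n, Q=6.0, N=20.0)
print(n)   # occupancies after three steps; total vehicle count is conserved
```

An incident is typically represented in such a model by temporarily reducing Q or N in the blocked cells for the estimated incident duration, which is where the econometric duration model feeds in.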
Abstract:
The major diabetes autoantigen, glutamic acid decarboxylase (GAD65), contains a region of sequence similarity to the P2C protein of coxsackie B virus, including six identical residues (PEVKEK), suggesting that cross-reactivity between coxsackie B virus and GAD65 can initiate autoimmune diabetes. We used the human islet cell mAbs MICA3 and MICA4 to identify the Ab epitopes of GAD65 by screening phage-displayed random peptide libraries. The identified peptide sequences could be mapped to a homology model of the pyridoxal phosphate (PLP) binding domain of GAD65. For MICA3, a surface loop containing the sequence PEVKEK and two adjacent exposed helixes were identified in the PLP binding domain, as well as a region of the C terminus of GAD65 that has previously been identified as critical for MICA3 binding. To confirm that the loop containing the PEVKEK sequence contributes to the MICA3 epitope, this loop was deleted by mutagenesis. This reduced binding of MICA3 by 70%. Peptide sequences selected using MICA4 were rich in basic or hydroxyl-containing amino acids, and the surface of the GAD65 PLP-binding domain surrounding Lys358, which is known to be critical for MICA4 binding, was likewise rich in these amino acids. Also, the two phage most reactive with MICA4 encoded the motif VALxG, and the reverse of this sequence, LAV, was located in this same region. Thus, we have defined the MICA3 and MICA4 epitopes on GAD65 using the combination of phage display, molecular modeling, and mutagenesis and have provided compelling evidence for the involvement of the PEVKEK loop in the MICA3 epitope.