63 results for Bayesian p-values


Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: Although serum ECP concentrations have been reported in normal children, there are currently no published upper cutoff reference limits for serum ECP in normal, nonatopic, nonasthmatic children aged 1-15 years.
METHODS: We recruited 123 nonatopic, nonasthmatic normal children attending the Royal Belfast Hospital for Sick Children for elective surgery and measured serum ECP concentrations. The effects of age and exposure to environmental tobacco smoke (ETS) on the upper reference limits were studied by multiple regression and fractional polynomials.
RESULTS: The median serum ECP concentration was 6.5 µg/l, and the 95th and 97.5th percentiles were 18.8 and 19.9 µg/l, respectively. The median and 95th percentile did not vary with age. Exposure to ETS was not associated with altered serum ECP concentrations (P = 0.14).
CONCLUSIONS: The 95th and 97.5th percentiles for serum ECP in normal, nonatopic, nonasthmatic children (aged 1-15 years) were 19 and 20 µg/l, respectively. Age and exposure to parental ETS did not significantly alter serum ECP concentrations or the upper reference limits. Our data provide upper cutoff reference limits for the use of serum ECP in normal children in clinical or research settings.
PMID: 10604557 [PubMed - indexed for MEDLINE]
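Such nonparametric upper reference limits are simply high sample percentiles. A minimal sketch of the computation, using hypothetical data rather than the study's measurements:

```python
import numpy as np

# Hypothetical serum ECP concentrations (µg/l) for 123 children;
# illustrative values only, not the study data.
rng = np.random.default_rng(0)
ecp = rng.lognormal(mean=np.log(6.5), sigma=0.5, size=123)

# Nonparametric reference limits as sample percentiles.
median, p95, p975 = np.percentile(ecp, [50, 95, 97.5])
print(f"median = {median:.1f}, 95th = {p95:.1f}, 97.5th = {p975:.1f} µg/l")
```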

Relevance:

30.00%

Publisher:

Abstract:

RATIONALE: Stable isotope values (δ¹³C and δ¹⁵N) of darted skin and blubber biopsies can shed light on the habitat use and diet of cetaceans, which are otherwise difficult to study. Non-dietary factors affect isotopic variability, chiefly the depletion of ¹³C due to the presence of ¹²C-rich lipids. The efficacy of post hoc lipid-correction models (normalization) must therefore be tested.
METHODS: For tissues with high natural lipid content (e.g., whale skin and blubber), chemical lipid extraction or normalization is necessary. C:N ratios, δ¹³C values and δ¹⁵N values were determined for duplicate control and lipid-extracted skin and blubber of fin (Balaenoptera physalus), humpback (Megaptera novaeangliae) and minke whales (B. acutorostrata) by continuous-flow elemental analysis isotope ratio mass spectrometry (CF-EA-IRMS). Six different normalization models were tested to correct δ¹³C values for the presence of lipids.
RESULTS: Following lipid extraction, significant increases in δ¹³C values were observed for both tissues in all three species. Significant increases were also found for δ¹⁵N values in minke whale skin and fin whale blubber. In fin whale skin, the δ¹⁵N values decreased, with no change observed in humpback whale skin. Non-linear models generally outperformed linear models, and the suitability of models varied by species and tissue, indicating the need for high model specificity even among these closely related taxa.
CONCLUSIONS: Given the poor predictive power of the models in estimating lipid-free δ¹³C values, and the unpredictable changes in δ¹⁵N values due to lipid extraction, we recommend against arithmetic normalization to account for lipid effects on δ¹³C values in balaenopterid skin or blubber samples. Rather, we recommend duplicate analysis of lipid-extracted (δ¹³C values) and untreated tissues (δ¹⁵N values). Copyright © 2012 John Wiley & Sons, Ltd.
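The abstract does not reproduce the six normalization models tested. As a purely illustrative sketch, a linear C:N-based correction of the general form used in this literature might look as follows (coefficients hypothetical):

```python
import numpy as np

# Generic linear lipid-normalization model:
#     δ13C_corrected = δ13C_bulk + b0 + b1 * C:N
# The coefficients below are hypothetical placeholders; in practice they are
# fitted per species and tissue from paired bulk / lipid-extracted samples.
def normalize_d13c(d13c_bulk, cn_ratio, b0=-3.32, b1=0.99):
    return d13c_bulk + b0 + b1 * cn_ratio

# Hypothetical bulk skin measurements (‰) and their C:N ratios.
d13c_bulk = np.array([-19.4, -18.9, -20.1])
cn_ratio = np.array([3.8, 4.2, 5.0])
print(normalize_d13c(d13c_bulk, cn_ratio))
```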

Relevance:

30.00%

Publisher:

Abstract:

Aim-To develop an expert system model for the diagnosis of fine needle aspiration cytology (FNAC) of the breast.

Methods-Knowledge and uncertainty were represented in the form of a Bayesian belief network which permitted the combination of diagnostic evidence in a cumulative manner and provided a final probability for the possible diagnostic outcomes. The network comprised 10 cytological features (evidence nodes), each independently linked to the diagnosis (decision node) by a conditional probability matrix. The system was designed to be interactive in that the cytopathologist entered evidence into the network in the form of likelihood ratios for the outcomes at each evidence node.
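A minimal sketch of this style of cumulative evidence combination, assuming a binary decision node (benign vs malignant) and conditionally independent evidence nodes entered as likelihood ratios (all numbers hypothetical):

```python
import numpy as np

# Prior odds for the decision node: P(malignant) / P(benign); hypothetical.
prior_odds = 1.0

# Likelihood ratios entered by the cytopathologist at each evidence node:
# LR = P(observed feature | malignant) / P(observed feature | benign).
# Values are hypothetical, for illustration only.
likelihood_ratios = [4.0, 2.5, 0.8, 3.0, 1.2]

# Cumulative combination: posterior odds are the prior odds times each LR,
# the naive-Bayes-style update that the belief network performs.
posterior_odds = prior_odds * np.prod(likelihood_ratios)
p_malignant = posterior_odds / (1.0 + posterior_odds)
print(f"P(malignant | evidence) = {p_malignant:.3f}")
```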

Results-The efficiency of the network was tested on a series of 40 breast FNAC specimens. The highest diagnostic probability provided by the network agreed with the cytopathologists' diagnosis in 100% of cases for the assessment of discrete benign and malignant aspirates. Atypical probably benign cases were given probabilities in favour of a benign diagnosis. Suspicious cases tended to have similar probabilities for both diagnostic outcomes and so, correctly, could not be assigned as benign or malignant. A closer examination of cumulative belief graphs for the diagnostic sequence of each case provided insight into the diagnostic process, and quantitative data that improved the identification of suspicious cases.

Conclusion-The further development of such a system will have three important roles in breast cytodiagnosis: (1) to aid the cytologist in making a more consistent and objective diagnosis; (2) to provide a teaching tool on breast cytological diagnosis for the non-expert; and (3) to serve as the first stage in the development of a system capable of automated diagnosis through the use of expert system machine vision.

Relevance:

30.00%

Publisher:

Abstract:

We present ultraviolet, optical, and near-infrared photometry and spectroscopy of SN 2009N in NGC 4487. This object is a Type II-P supernova with spectra resembling those of subluminous II-P supernovae, while its bolometric luminosity is similar to that of the intermediate-luminosity SN 2008in. We created SYNOW models of the plateau-phase spectra for line identification and to measure the expansion velocity. In the near-infrared spectra we find signs indicating possible weak interaction between the supernova ejecta and pre-existing circumstellar material. These signs are also present in the previously unpublished near-infrared spectra of SN 2008in. The distance to SN 2009N is determined via the expanding photosphere method and the standard candle method as D = 21.6 ± 1.1 Mpc. The produced nickel mass is estimated to be ∼0.020 ± 0.004 M⊙. We infer the physical properties of the progenitor at the explosion through hydrodynamical modelling of the observables. We find a total energy of ∼0.48 × 10⁵¹ erg, an ejected mass of ∼11.5 M⊙, and an initial radius of ∼287 R⊙.
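For context, the expanding photosphere method rests on the relation θ = v(t − t₀)/D between the photospheric angular radius θ, the expansion velocity v, and the distance D, so a linear fit of θ/v against time has slope 1/D. A minimal sketch with entirely hypothetical numbers (not the SN 2009N data):

```python
import numpy as np

# Hypothetical EPM measurements at four epochs.
t = np.array([10.0, 20.0, 30.0, 40.0]) * 86400.0   # time since explosion, s
v = np.array([8.0e3, 7.0e3, 6.2e3, 5.6e3])         # photospheric velocity, km/s
theta = np.array([2.0e-11, 3.6e-11, 4.8e-11, 5.7e-11])  # angular radius, rad

# theta / v = (t - t0) / D, so the slope of theta/v versus t is 1/D (in 1/km).
slope, _ = np.polyfit(t, theta / v, 1)
km_per_mpc = 3.086e19
print(f"D ≈ {1.0 / slope / km_per_mpc:.1f} Mpc")
```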

Relevance:

30.00%

Publisher:

Abstract:

We present a Bayesian-odds-ratio-based algorithm for detecting stellar flares in light-curve data. We assume flares are described by a model in which there is a rapid rise with a half-Gaussian profile, followed by an exponential decay. Our signal model also contains a polynomial background model required to fit underlying light-curve variations in the data, which could otherwise partially mimic a flare. We characterize the false alarm probability and efficiency of this method under the assumption that any unmodelled noise in the data is Gaussian, and compare it with a simpler thresholding method based on that used in Walkowicz et al. We find our method has a significant increase in detection efficiency for low signal-to-noise ratio (S/N) flares. For a conservative false alarm probability, our method can detect 95 per cent of flares with S/N less than 20, compared with an S/N of 25 for the simpler method. We also test how well the assumption of Gaussian noise holds by applying the method to a selection of 'quiet' Kepler stars. As an example, we have applied our method to a selection of stars in Kepler Quarter 1 data. The method finds 687 flaring stars with a total of 1873 flares after vetoes have been applied. For these flares we have made preliminary characterizations of their durations and S/N.
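A minimal sketch of the assumed flare signal shape (half-Gaussian rise, exponential decay); parameter names and values are illustrative, and the full method additionally marginalizes over the polynomial background and the noise:

```python
import numpy as np

# Flare profile: half-Gaussian rise up to t_peak, exponential decay after.
def flare_model(t, t_peak, amplitude, sigma_rise, tau_decay):
    rise = amplitude * np.exp(-0.5 * ((t - t_peak) / sigma_rise) ** 2)
    decay = amplitude * np.exp(-(t - t_peak) / tau_decay)
    return np.where(t <= t_peak, rise, decay)

# Hypothetical light-curve segment: one flare plus Gaussian noise.
t = np.linspace(0.0, 10.0, 500)   # days
flux = flare_model(t, t_peak=4.0, amplitude=1.0, sigma_rise=0.05,
                   tau_decay=0.4)
flux += np.random.default_rng(1).normal(0.0, 0.05, size=t.size)
```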

Relevance:

30.00%

Publisher:

Abstract:

This work presents two new score functions based on the Bayesian Dirichlet equivalent uniform (BDeu) score for learning Bayesian network structures. They account for the sensitivity of BDeu to variations in the parameters of the Dirichlet prior. The scores take on the most adversarial and the most beneficial priors among those within a contamination set around the symmetric one. We build these scores in such a way that they are decomposable and can be computed efficiently. Because of that, they can be integrated into any state-of-the-art structure learning method that explores the space of directed acyclic graphs and allows decomposable scores. Empirical results suggest that our scores outperform the standard BDeu score in terms of the likelihood of unseen data and in terms of edge discovery with respect to the true network, at least when the training sample size is small. We discuss the relation between these new scores and the accuracy of inferred models. Moreover, our new criteria can be used to identify the amount of data after which learning is saturated, that is, the point beyond which additional data are of little help in improving the resulting model.
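For context, the standard BDeu local score that these new criteria perturb can be computed as below (a minimal sketch; the robust min/max variants over the contamination set are not shown):

```python
import numpy as np
from scipy.special import gammaln

def bdeu_local_score(counts, alpha=1.0):
    """BDeu local score for one variable.

    counts: (q, r) array of N_jk, the number of records where the variable
    takes its k-th state under its j-th parent configuration.
    alpha: equivalent sample size of the symmetric Dirichlet prior.
    """
    q, r = counts.shape
    a_j = alpha / q          # prior mass per parent configuration
    a_jk = alpha / (q * r)   # prior mass per (configuration, state) cell
    n_j = counts.sum(axis=1)
    score = np.sum(gammaln(a_j) - gammaln(a_j + n_j))
    score += np.sum(gammaln(a_jk + counts) - gammaln(a_jk))
    return score

# Toy counts: a binary variable (r = 2) under two parent configurations (q = 2).
counts = np.array([[30, 5], [4, 21]])
print(bdeu_local_score(counts, alpha=1.0))
```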

Relevance:

30.00%

Publisher:

Abstract:

This work presents novel algorithms for learning Bayesian networks of bounded treewidth. Both exact and approximate methods are developed. The exact method combines mixed integer linear programming formulations for structure learning and treewidth computation. The approximate method consists in sampling k-trees (maximal graphs of treewidth k), and subsequently selecting, exactly or approximately, the best structure whose moral graph is a subgraph of that k-tree. The approaches are empirically compared to each other and to state-of-the-art methods on a collection of public data sets with up to 100 variables.
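The k-trees in question follow the standard incremental definition: start from a (k+1)-clique and repeatedly attach a new vertex to an existing k-clique. A minimal sketch of sampling by that construction (note: this naive procedure is not uniform over k-trees; the paper relies on a more careful sampling scheme):

```python
import itertools
import random

def sample_ktree(n, k, seed=0):
    """Build a random k-tree on vertices 0..n-1 by incremental construction."""
    rng = random.Random(seed)
    edges = set(itertools.combinations(range(k + 1), 2))  # initial (k+1)-clique
    # k-cliques currently available as attachment points.
    cliques = list(itertools.combinations(range(k + 1), k))
    for v in range(k + 1, n):
        base = rng.choice(cliques)            # choose a k-clique for v
        edges.update((u, v) for u in base)    # connect v to all its vertices
        # Each (k-1)-subset of base, together with v, forms a new k-clique.
        cliques.extend(tuple(sorted(s + (v,)))
                       for s in itertools.combinations(base, k - 1))
    return edges

# A 2-tree on 8 vertices has 2*8 - 3 = 13 edges.
print(len(sample_ktree(8, 2)))
```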

Relevance:

30.00%

Publisher:

Abstract:

Credal networks are graph-based statistical models whose parameters take values in a set, instead of being sharply specified as in traditional statistical models (e.g., Bayesian networks). The computational complexity of inferences on such models depends on the irrelevance/independence concept adopted. In this paper, we study inferential complexity under the concepts of epistemic irrelevance and strong independence. We show that inferences under strong independence are NP-hard even in trees with binary variables except for a single ternary one. We prove that under epistemic irrelevance the polynomial-time complexity of inferences in credal trees is not likely to extend to more general models (e.g., singly connected topologies). These results clearly distinguish networks that admit efficient inferences and those where inferences are most likely hard, and settle several open questions regarding their computational complexity. We show that these results remain valid even if we disallow the use of zero probabilities. We also show that the computation of bounds on the probability of the future state in a hidden Markov model is the same whether we assume epistemic irrelevance or strong independence, and we prove an analogous result for inference in Naive Bayes structures. These inferential equivalences are important for practitioners, as hidden Markov models and Naive Bayes networks are used in real applications of imprecise probability.
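To make the contrast with Bayesian networks concrete, here is a minimal sketch of inference in a two-node credal network, where each parameter is only known to lie in an interval and the query returns bounds rather than a single probability (intervals hypothetical; for this simple marginal query the extremes are attained at vertices of the credal sets):

```python
import itertools

# Credal network A -> B with binary variables; each probability lies in an
# interval rather than being sharply specified. All intervals hypothetical.
p_a1 = (0.3, 0.5)            # P(A = 1)
p_b1_a0 = (0.1, 0.2)         # P(B = 1 | A = 0)
p_b1_a1 = (0.6, 0.9)         # P(B = 1 | A = 1)

# Enumerate extreme points of the credal sets to bound P(B = 1).
values = [(1 - pa) * pb0 + pa * pb1
          for pa, pb0, pb1 in itertools.product(p_a1, p_b1_a0, p_b1_a1)]
print(f"P(B = 1) in [{min(values):.3f}, {max(values):.3f}]")
```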

Relevance:

30.00%

Publisher:

Abstract:

Credal networks are graph-based statistical models whose parameters take values in a set, instead of being sharply specified as in traditional statistical models (e.g., Bayesian networks). The result of inferences with such models depends on the irrelevance/independence concept adopted. In this paper, we study the computational complexity of inferences under the concepts of epistemic irrelevance and strong independence. We strengthen existing complexity results by showing that inferences with strong independence are NP-hard even in credal trees with ternary variables, which indicates that tractable algorithms, including the existing one for epistemic trees, cannot be used for strong independence. We prove that the polynomial-time complexity of inferences in credal trees under epistemic irrelevance is not likely to extend to more general models, because the problem becomes NP-hard even in simple polytrees. These results draw a definite line between networks with efficient inferences and those where inferences are hard, and close several open questions regarding the computational complexity of such models.

Relevance:

30.00%

Publisher:

Abstract:

This paper addresses the problem of learning Bayesian network structures from data based on score functions that are decomposable. It describes properties that strongly reduce the time and memory costs of many known methods without losing global optimality guarantees. These properties are derived for different score criteria such as Minimum Description Length (or Bayesian Information Criterion), Akaike Information Criterion and Bayesian Dirichlet Criterion. A branch-and-bound algorithm is then presented that integrates structural constraints with data in a way that guarantees global optimality. As an example, structural constraints are used to map the problem of structure learning in dynamic Bayesian networks into a corresponding augmented Bayesian network. Finally, we show empirically the benefits of using the properties with state-of-the-art methods and with the new algorithm, which is able to handle larger data sets than before.
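Decomposability is what makes such pruning and branch-and-bound feasible: the score of a DAG is a sum of per-variable terms that depend only on each variable's parent set. A minimal sketch of the BIC/MDL case (function and variable names are illustrative):

```python
import numpy as np
import pandas as pd

def bic_local_score(data: pd.DataFrame, var: str, parents: list) -> float:
    """Local BIC term for one variable given a candidate parent set."""
    n = len(data)
    r = data[var].nunique()
    if parents:
        counts = data.groupby(parents + [var]).size().unstack(fill_value=0)
    else:
        counts = data[var].value_counts().to_frame().T
    n_jk = counts.to_numpy(dtype=float)
    n_j = n_jk.sum(axis=1, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        log_lik = np.nansum(n_jk * np.log(n_jk / n_j))
    penalty = 0.5 * np.log(n) * len(n_jk) * (r - 1)  # free parameters
    return log_lik - penalty

def bic_score(data: pd.DataFrame, dag: dict) -> float:
    """dag maps each variable name to its parent list; the global score is
    just the sum of independent local terms."""
    return sum(bic_local_score(data, v, ps) for v, ps in dag.items())
```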

Relevance:

30.00%

Publisher:

Abstract:

Retrospective clinical datasets are often characterized by a relatively small sample size and many missing values. In this case, a common way of handling the missingness consists in discarding patients with missing covariates from the analysis, further reducing the sample size. Alternatively, if the mechanism that generated the missingness allows, incomplete data can be imputed on the basis of the observed data, avoiding the reduction in sample size and allowing methods for complete data to be applied afterwards. Moreover, methodologies for data imputation may depend on the particular purpose and may achieve better results by considering specific characteristics of the domain. The problem of missing data treatment is studied in the context of survival tree analysis for the estimation of a prognostic patient stratification. Survival tree methods usually address this problem by using surrogate splits, that is, splitting rules that use other variables yielding results similar to the original ones. Instead, our methodology consists in modeling the dependencies among the clinical variables with a Bayesian network, which is then used to perform data imputation, thus allowing the survival tree to be applied to the completed dataset. The Bayesian network is learned directly from the incomplete data using a structural expectation–maximization (EM) procedure in which the maximization step is performed with an exact anytime method, so that the only source of approximation is the EM formulation itself. On both simulated and real data, our proposed methodology usually outperformed several existing methods for data imputation, and the imputation so obtained improved the stratification estimated by the survival tree (especially with respect to using surrogate splits).
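A minimal sketch of the imputation step itself, once a network has been learned (the structural EM learning part is not shown; variables, states and CPT values are hypothetical):

```python
# Toy two-variable network: P(stage) and P(marker | stage), both hypothetical.
p_stage = {"early": 0.6, "late": 0.4}
p_marker = {
    "early": {"low": 0.8, "high": 0.2},
    "late": {"low": 0.3, "high": 0.7},
}

def impute_stage(observed_marker):
    # Fill the missing 'stage' with its most probable value given the
    # observed variables of the same record (here, just 'marker').
    post = {s: p_stage[s] * p_marker[s][observed_marker] for s in p_stage}
    return max(post, key=post.get)

record = {"stage": None, "marker": "high"}   # stage is missing
record["stage"] = impute_stage(record["marker"])
print(record)   # {'stage': 'late', 'marker': 'high'}
```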

Relevance:

30.00%

Publisher:

Abstract:

This paper presents new results for the (partial) maximum a posteriori (MAP) problem in Bayesian networks, which is the problem of querying the most probable state configuration of some of the network variables given evidence. It is demonstrated that the problem remains hard even in networks with very simple topology, such as binary polytrees and simple trees (including the Naive Bayes structure), which extends previous complexity results. Furthermore, a Fully Polynomial Time Approximation Scheme for MAP in networks with bounded treewidth and bounded number of states per variable is developed. Approximation schemes were thought to be impossible, but here it is shown otherwise under the assumptions just mentioned, which are adopted in most applications.
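To make the problem statement concrete, a brute-force sketch of a (partial) MAP query on a toy chain, with the hidden variable summed out and the query variable maximized (CPT values hypothetical):

```python
# Toy chain A -> H -> B: query A, marginalize the hidden H, observe B = 1.
# The max-over-query, sum-over-hidden structure is what distinguishes MAP
# from plain marginal or most-probable-explanation queries.
p_a = {0: 0.6, 1: 0.4}
p_h_a = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.2, 1: 0.8}}   # P(H | A)
p_b_h = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.3, 1: 0.7}}   # P(B | H)

def map_a(evidence_b):
    def joint(a):  # P(A = a, B = evidence_b), summing out H
        return p_a[a] * sum(p_h_a[a][h] * p_b_h[h][evidence_b] for h in (0, 1))
    return max(p_a, key=joint)

print(map_a(evidence_b=1))  # -> 1, since 0.232 > 0.168
```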