967 results for Bayesian hypothesis testing
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending the corresponding approaches to the regional scale represents a major, and as-yet largely unresolved, challenge. To address this problem, we have developed a downscaling procedure based on a non-linear Bayesian sequential simulation approach. The basic objective of this algorithm is to estimate the value of the sparsely sampled hydraulic conductivity at non-sampled locations based on its relation to the electrical conductivity, which is available throughout the model space. The in situ relationship between the hydraulic and electrical conductivities is described through a non-parametric multivariate kernel density function. This method is then applied to the stochastic integration of low-resolution, regional-scale electrical resistivity tomography (ERT) data in combination with high-resolution, local-scale downhole measurements of the hydraulic and electrical conductivities. Finally, the overall viability of this downscaling approach is tested and verified by performing and comparing flow and transport simulations through the original and the downscaled hydraulic conductivity fields. Our results indicate that the proposed procedure yields remarkably faithful estimates of the regional-scale hydraulic conductivity structure and correspondingly reliable predictions of the transport characteristics over relatively long distances.
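The kernel-density step of such an approach can be sketched in a few lines. The following is a minimal illustration with synthetic data, not the authors' implementation; the linear relation between the log-conductivities, the sample sizes, and all variable names are assumptions made for the example:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Synthetic co-located borehole data (illustrative only): assume log
# hydraulic conductivity (log_K) is noisily related to log electrical
# conductivity (log_sigma).
log_sigma = rng.normal(-2.0, 0.5, 500)
log_K = 1.5 * log_sigma + rng.normal(0.0, 0.3, 500)

# Non-parametric multivariate kernel density of the joint distribution
joint_kde = gaussian_kde(np.vstack([log_sigma, log_K]))

# Conditional expectation of log_K given an observed log_sigma, evaluated
# on a grid. A sequential simulation step would instead *draw* from this
# conditional density rather than take its mean.
def conditional_mean_logK(obs_log_sigma, grid=np.linspace(-6.0, 0.0, 200)):
    pts = np.vstack([np.full_like(grid, obs_log_sigma), grid])
    w = joint_kde(pts)                     # joint density along the grid
    return np.sum(grid * w) / np.sum(w)    # normalized conditional mean

est = conditional_mean_logK(-2.0)          # close to 1.5 * (-2.0) = -3.0
```

In a full sequential simulation, the conditional density would also be combined with the spatial covariance model before sampling each node.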
Abstract:
Resolving the paradox of sex, with its twofold cost to genic transmission, remains one of the major unresolved questions in evolutionary biology. Counting this genetic cost has now gone genomic. In this issue of Molecular Ecology, Kraaijeveld et al. (2012) describe the first genome-scale comparative study of related sexual and asexual animal lineages, to test the hypothesis that asexuals bear heavier loads of deleterious transposable elements. A much higher density of such parasites might be expected, due to the inability of asexual lineages to purge transposons via mechanisms exclusive to sexual reproduction. They find that the answer is yes--and no--depending upon the family of transposons considered. Like many such advances in testing theory, more questions are raised by this study than answered, but a door has been opened to molecular evolutionary analyses of how responses to selection from intragenomic parasites might mediate the costs of sex.
Abstract:
Histological subtyping and grading by malignancy are the cornerstones of the World Health Organization (WHO) classification of tumors of the central nervous system. They are intended to provide clinicians with guidance as to the course of disease to be expected and the choices of treatment to be made. Nonetheless, patients with histologically identical tumors may have very different outcomes, notably among patients with astrocytic and oligodendroglial gliomas of WHO grades II and III. In gliomas of adulthood, 3 molecular markers have undergone extensive studies in recent years: 1p/19q chromosomal codeletion, O(6)-methylguanine methyltransferase (MGMT) promoter methylation, and mutations of isocitrate dehydrogenase (IDH) 1 and 2. However, the assessment of these molecular markers has so far not been implemented in clinical routine because of the lack of therapeutic implications. In fact, these markers were considered to be prognostic irrespective of whether patients were receiving radiotherapy (RT), chemotherapy, or both (1p/19q, IDH1/2), or of limited value because testing is too complex and no chemotherapy alternative to temozolomide was available (MGMT). In 2012, this situation changed: long-term follow-up of the Radiation Therapy Oncology Group 9402 and European Organisation for Research and Treatment of Cancer 26951 trials demonstrated an overall survival benefit from the addition to RT of chemotherapy with procarbazine/CCNU/vincristine confined to patients with anaplastic oligodendroglial tumors with (vs without) 1p/19q codeletion. Furthermore, in elderly glioblastoma patients, the NOA-08 and the Nordic trial of RT alone versus temozolomide alone demonstrated a profound impact of MGMT promoter methylation on outcome by therapy and thus established MGMT as a predictive biomarker in this patient population.
These recent results call for the routine implementation of 1p/19q and MGMT testing at least in subpopulations of malignant glioma patients and represent an encouraging step toward the development of personalized therapeutic approaches in neuro-oncology.
Abstract:
In this paper we report on the growth of thick films of magnetoresistive La2/3Sr1/3MnO3 by using spray and screen printing techniques on various substrates (Al2O3 and ZrO2). The growth conditions are explored in order to optimize the microstructure of the films. The films display a room-temperature magnetoresistance of 0.0012%/Oe in the 1 kOe field region. A magnetic sensor is described and tested.
Abstract:
OBJECTIVES: To obtain information about the prevalence of, reasons for, and adequacy of HIV testing in the general population in Switzerland in 1992. DESIGN: Telephone survey (n = 2800). RESULTS: Some 47% of the sample underwent one HIV test performed through blood donation (24%), voluntary testing (17%) or both (6%). Of the sample, 46% considered themselves well or very well informed about the HIV test. Patients reported unsystematic pre-test screening by doctors for the main HIV risks. People having been in situations of potential exposure to risk were more likely to have had the test than others. Overall, 85% of those HIV-tested had a relevant, generally risk-related reason for having it performed. CONCLUSIONS: HIV testing is widespread in Switzerland. Testing is mostly performed for relevant reasons. Pre-test counselling is poor and an opportunity for prevention is thus lost.
Abstract:
As a rigorous combination of probability theory and graph theory, Bayesian networks currently enjoy widespread interest as a means for studying factors that affect the coherent evaluation of scientific evidence in forensic science. Paper I of this series of papers intends to contribute to the discussion of Bayesian networks as a framework that is helpful for both illustrating and implementing statistical procedures that are commonly employed for the study of uncertainties (e.g. the estimation of unknown quantities). While the respective statistical procedures are widely described in the literature, the primary aim of this paper is to offer an essentially non-technical introduction to how interested readers may use these analytical approaches - with the help of Bayesian networks - for processing their own forensic science data. Attention is mainly drawn to the structure and underlying rationale of a series of basic and context-independent network fragments that users may incorporate as building blocks while constructing larger inference models. As an example of how this may be done, the proposed concepts will be used in a second paper (Part II) for specifying graphical probability networks whose purpose is to assist forensic scientists in the evaluation of scientific evidence encountered in the context of forensic document examination (i.e. results of the analysis of black toners present on printed or copied documents).
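The simplest such network fragment, a hypothesis node with a single evidence child, can be evaluated by direct enumeration of Bayes' theorem. The sketch below is not from the paper; all probability values are hypothetical numbers chosen only to make the arithmetic concrete:

```python
# Minimal two-node network fragment (hypothesis H -> evidence E),
# evaluated by enumeration. All numbers are hypothetical.
p_H = 0.01              # prior probability of hypothesis H
p_E_given_H = 0.95      # probability of observing the evidence if H is true
p_E_given_notH = 0.02   # probability of observing the evidence if H is false

# Bayes' theorem: P(H | E) = P(E | H) P(H) / P(E)
p_E = p_E_given_H * p_H + p_E_given_notH * (1 - p_H)
p_H_given_E = p_E_given_H * p_H / p_E          # posterior, about 0.32

# The likelihood ratio summarizes the evidential strength of E for H.
likelihood_ratio = p_E_given_H / p_E_given_notH  # = 47.5
```

Larger models chain such fragments together; the posterior of one fragment becomes the prior feeding the next.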
Abstract:
Peroxisome proliferator-activated receptors (PPARs) are nuclear receptors acting as lipid sensors. Besides its metabolic activity in peripheral organs, the PPAR beta/delta isotype is highly expressed in the brain, and its deletion in mice induces a brain developmental defect. Nevertheless, exploration of PPARbeta action in the central nervous system remains limited. The lipid content alteration observed in PPARbeta null brains and the positive action of PPARbeta agonists on oligodendrocyte differentiation, a process characterized by lipid accumulation, suggest that PPARbeta acts on fatty acid and/or cholesterol metabolism in the brain. PPARbeta could also regulate central inflammation and antioxidant mechanisms in the damaged brain. Even if not fully understood, the neuroprotective effect of PPARbeta agonists highlights their potential benefit in treating various acute or chronic neurological disorders. In this perspective, we need to better understand the basic function of PPARbeta in the brain. This review proposes several leads for future research.
Abstract:
When researchers introduce a new test they have to demonstrate that it is valid, using unbiased designs and suitable statistical procedures. In this article we use Monte Carlo analyses to highlight how incorrect statistical procedures (i.e., stepwise regression, extreme scores analyses) or ignoring regression assumptions (e.g., heteroscedasticity) contribute to wrong validity estimates. Beyond these demonstrations, and as an example, we re-examined the results reported by Warwick, Nettelbeck, and Ward (2010) concerning the validity of the Ability Emotional Intelligence Measure (AEIM). Warwick et al. used the wrong statistical procedures to conclude that the AEIM was incrementally valid beyond intelligence and personality traits in predicting various outcomes. In our re-analysis, we found that the reliability-corrected multiple correlation of their measures with personality and intelligence was up to .69. Using robust statistical procedures and appropriate controls, we also found that the AEIM did not predict incremental variance in GPA, stress, loneliness, or well-being, demonstrating the importance of testing validity rather than merely looking for it.
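The core problem with stepwise-style selection can be demonstrated in a few lines of Monte Carlo code. This sketch, with arbitrary sample sizes and seeds of my own choosing, shows how picking the best of many null predictors produces a spuriously large validity coefficient:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, reps = 50, 20, 200   # sample size, candidate predictors, replications

best_r = []
for _ in range(reps):
    y = rng.normal(size=n)               # outcome
    X = rng.normal(size=(n, k))          # predictors truly unrelated to y
    r = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(k)])
    best_r.append(np.abs(r).max())       # stepwise-style: keep the "best"

# Despite zero true validity, the retained predictor looks substantial.
mean_best = np.mean(best_r)
```

Cross-validation, or a correction for the number of candidates examined, would remove most of this inflation.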
Abstract:
The present work focuses on the skew-symmetry index as a measure of social reciprocity. This index is based on the correspondence between the amount of behaviour that each individual addresses to its partners and what it receives from them in return. Although the skew-symmetry index enables researchers to describe social groups, statistical inferential tests are required. The main aim of the present study is to propose an overall statistical technique for testing symmetry in experimental conditions, calculating the skew-symmetry statistic (Φ) at the group level. Sampling distributions for the skew-symmetry statistic have been estimated by means of a Monte Carlo simulation in order to allow researchers to make statistical decisions. Furthermore, this study will allow researchers to choose the optimal experimental conditions for carrying out their research, as the power of the statistical test has been estimated. This statistical test could be used in experimental social psychology studies in which researchers may control the group size and the number of interactions within dyads.
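The Monte Carlo estimation of a null sampling distribution for such a statistic can be sketched as follows. The Φ formula used here, the proportion of total variation carried by the skew-symmetric part of the interaction matrix, is my reading of the standard skew-symmetry index, and the group size and interaction model are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

def skew_symmetry(X):
    """Proportion of total variation due to the skew-symmetric part of a
    round-robin interaction matrix (diagonal assumed zero). Ranges from
    0 (perfect reciprocity) to 0.5 (fully one-directional exchanges)."""
    K = (X - X.T) / 2.0                  # skew-symmetric component
    return np.sum(K**2) / np.sum(X**2)

# Null sampling distribution by Monte Carlo: random dyadic interaction
# counts for a group of 5 individuals (interaction rate is illustrative).
n, reps = 5, 2000
null_phi = []
for _ in range(reps):
    X = rng.poisson(5.0, size=(n, n)).astype(float)
    np.fill_diagonal(X, 0.0)
    null_phi.append(skew_symmetry(X))

# 95th percentile of the null distribution serves as the critical value.
crit = np.quantile(null_phi, 0.95)
```

An observed Φ above `crit` would then lead to rejecting the hypothesis of symmetric exchange at the 5% level.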
Abstract:
This research evaluated the concrete strength of two mixes which were used in the Polk County project NHS-500-1(3)--10-77 and were developed to meet a contract requirement of 900 psi third-point 28-day flexural strength. Two concrete mixes, the Proposed Mix and the Enhanced Mix, were tested for strength. Based on the experimental results, it was found that the addition of 50 lb of cementitious materials did not significantly increase concrete strength. The requirement of 900 psi 28-day third-point flexural strength (MOR-TPL) was not achieved by this amount of addition of cementitious materials.
Abstract:
The objective of this research project was to service load test a representative sample of old reinforced concrete bridges (some of them historic and some of them scheduled for demolition) with the results being used to create a database so the performance of similar bridges could be predicted. The types of bridges tested included two reinforced concrete open spandrel arches, two reinforced concrete filled spandrel arches, one reinforced concrete slab bridge, and one two-span reinforced concrete stringer bridge. The testing of each bridge consisted of applying a static load at various locations on the bridges and monitoring strains and deflections in critical members. The load was applied by means of a tandem axle dump truck with varying magnitudes of load. At each load increment, the truck was stopped at predetermined transverse and longitudinal locations and strain and deflection data were obtained. The strain data obtained were then evaluated in relation to the strain values predicted by traditional analytical procedures, and a carrying capacity of the bridges was determined based on the experimental data. The response of a majority of the bridges tested was considerably lower than that predicted by analysis. Thus, the safe load carrying capacities of the bridges were greater than those predicted by the analytical models, and in a few cases, the load carrying capacities were found to be three or four times greater than calculated values. However, the test results of one bridge were lower than those predicted by analysis and thus resulted in the analytical rating being reduced. The results of the testing verified that traditional analytical methods, in most instances, are conservative and that the safe load carrying capacities of a majority of the reinforced concrete bridges are considerably greater than what one would determine on the basis of analysis alone.
In extrapolating the results obtained from diagnostic load tests to levels greater than those placed on the bridge during the load test, care must be taken to ensure safe bridge performance at the higher load levels. To extrapolate the load test results from the bridges tested in this investigation, the method developed by Lichtenstein in NCHRP Project 12-28(13)A was used.
Abstract:
The characterization and categorization of coarse aggregates for use in portland cement concrete (PCC) pavements is a highly refined process at the Iowa Department of Transportation. Over the past 10 to 15 years, much effort has been directed at pursuing direct testing schemes to supplement or replace existing physical testing schemes. Direct testing refers to the process of directly measuring the chemical and mineralogical properties of an aggregate and then attempting to correlate those measured properties to historical performance information (i.e., field service record). This is in contrast to indirect measurement techniques, which generally attempt to extrapolate the performance of laboratory test specimens to expected field performance. The purpose of this research project was to investigate and refine the use of direct testing methods, such as X-ray analysis techniques and thermal analysis techniques, to categorize carbonate aggregates for use in portland cement concrete. The results of this study indicated that the general testing methods that are currently used to obtain data for estimating service life tend to be very reliable and have good to excellent repeatability. Several changes in the current techniques were recommended to enhance the long-term reliability of the carbonate database. These changes can be summarized as follows: (a) Limits that are more stringent need to be set on the maximum particle size in the samples subjected to testing. This should help to improve the reliability of all three of the test methods studied during this project. (b) X-ray diffraction testing needs to be refined to incorporate the use of an internal standard. This will help to minimize the influence of sample positioning errors and it will also allow for the calculation of the concentration of the various minerals present in the samples. 
(c) Thermal analysis data needs to be corrected for moisture content and clay content prior to calculating the carbonate content of the sample.