933 results for Bayesian hierarchical linear model


Relevance:

100.00%

Publisher:

Abstract:

Liver tissue was collected from eight randomly selected dairy cows at a slaughterhouse to test whether gene expression of pyruvate carboxylase (PC), mitochondrial phosphoenolpyruvate carboxykinase (PEPCKm) and cytosolic phosphoenolpyruvate carboxykinase (PEPCKc) differs between locations in the liver. The liver samples were analysed for mRNA expression levels of PC, PEPCKc and PEPCKm and subjected to the MIXED procedure of SAS to test for an effect of sampling location, with cow liver as the repeated subject. Additionally, the general linear model (GLM) procedure for analysis of variance was applied to test for significant differences in mRNA abundance of PEPCKm, PEPCKc and PC between livers. In conclusion, this study demonstrated that mRNA abundance of PC, PEPCKc and PEPCKm does not differ between locations within the liver but may differ between individual cows.
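The GLM analysis-of-variance step described above can be sketched as a plain one-way ANOVA F test across liver locations. This is a minimal illustration, not the study's SAS code, and the expression values below are invented:

```python
# Hypothetical sketch of a one-way ANOVA F test for location differences
# in mRNA abundance; the relative expression values are invented.

def one_way_anova_F(groups):
    """Return the F statistic for a one-way ANOVA across groups."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    # Between-group sum of squares
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # Within-group sum of squares
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Relative PC mRNA abundance at three liver locations (invented numbers,
# chosen so the location means coincide, as in the study's conclusion)
locations = [[1.0, 1.2, 0.9, 1.1], [1.1, 1.0, 1.2, 0.9], [0.9, 1.1, 1.0, 1.2]]
F = one_way_anova_F(locations)
```

A small F (here essentially zero) is consistent with the study's finding of no location effect; the repeated-measures structure (cow as subject) handled by PROC MIXED is omitted from this sketch.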

Background: Levels of differentiation among populations depend on both demographic and selective factors: genetic drift and local adaptation increase population differentiation, which is eroded by gene flow and balancing selection. We describe here the genomic distribution and the properties of genomic regions with unusually high and low levels of population differentiation in humans, to assess the influence of selective and neutral processes on human genetic structure. Methods: Individual SNPs of the Human Genome Diversity Panel (HGDP) showing significantly high or low levels of population differentiation were detected under a hierarchical-island model (HIM). A hidden Markov model allowed us to detect genomic regions, or islands, of high or low population differentiation. Results: Under the HIM, only 1.5% of all SNPs are significant at the 1% level, but their genomic spatial distribution is significantly non-random. We find evidence that local adaptation shaped the high-differentiation islands, as they are enriched for non-synonymous SNPs and overlap with previously identified candidate regions for positive selection. Moreover, there is a negative relationship between island size and recombination rate, which is stronger for islands overlapping genes. Gene ontology analysis supports the role of diet as a major selective pressure in these highly differentiated islands. Low-differentiation islands are also enriched for non-synonymous SNPs, and contain a disproportionately high proportion of genes belonging to the 'Oncogenesis' biological process.
Conclusions: Even though selection seems to shape islands of high population differentiation, neutral demographic processes might have promoted the appearance of some genomic islands, since (i) as much as 20% of islands lie in non-genic regions, (ii) these non-genic islands are on average two times shorter than genic islands, suggesting more rapid erosion by recombination, and (iii) most loci are strongly differentiated between Africans and non-Africans, a result consistent with known human demographic history.
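The per-SNP differentiation statistic behind such scans is typically an F_ST-type measure. As a hedged illustration (the paper's HIM-based test is more elaborate), here is the basic F_ST = (H_T − H_S)/H_T for a biallelic SNP, with invented allele frequencies:

```python
# Sketch of per-SNP F_ST, the kind of statistic underlying scans for
# high- and low-differentiation loci; allele frequencies are invented.

def fst(freqs):
    """F_ST = (H_T - H_S) / H_T for a biallelic SNP, given the
    alternative-allele frequency in each population (equal weights)."""
    p_bar = sum(freqs) / len(freqs)
    h_t = 2 * p_bar * (1 - p_bar)                            # total heterozygosity
    h_s = sum(2 * p * (1 - p) for p in freqs) / len(freqs)   # mean within-pop
    return (h_t - h_s) / h_t if h_t > 0 else 0.0

# A strongly differentiated SNP versus a weakly differentiated one
high = fst([0.9, 0.1])   # very different frequencies across populations
low = fst([0.5, 0.55])   # similar frequencies
```

`high` comes out at 0.64 and `low` near zero, the two tails that the hierarchical-island model flags as significantly high or low differentiation.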

BACKGROUND: The first investigations of the interactions between weather and the incidence of acute myocardial infarction date back to 1938. The early observation of a higher incidence of myocardial infarction in the cold season has been confirmed in very different geographical regions and cohorts. While the influence of seasonal variation on the incidence of myocardial infarction has been extensively documented, the impact of individual meteorological parameters on the disease has so far not been investigated systematically. The present study therefore assessed the impact of the essential weather and climate variables on the incidence of myocardial infarction. METHODS: The daily incidence of myocardial infarction was calculated from a national hospitalization survey. Hourly weather and climate data were provided by the national weather forecast database. The epidemiological and meteorological data were correlated by multivariate analysis based on a generalized linear model assuming a log link function and a Poisson distribution. RESULTS: High ambient pressure, high pressure gradients, and heavy wind activity were associated with an increase in the incidence of the total of 6,560 hospitalizations for myocardial infarction, irrespective of geographical region. Snowfall and rainfall had inconsistent effects. Temperature, Foehn, and lightning showed no statistically significant impact. CONCLUSIONS: Ambient pressure, pressure gradient, and wind activity had a statistically significant impact on the incidence of myocardial infarction in Switzerland from 1990 to 1994. To establish a cause-and-effect relationship, more data are needed on the interaction between the pathophysiological mechanisms of the acute coronary syndrome and weather and climate variables.
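A Poisson generalized linear model with a log link, as used above, is classically fitted by iteratively reweighted least squares (IRLS). The following is a minimal sketch with one covariate; the daily counts and the pressure-gradient index are invented, not the study's data:

```python
import math

# Minimal sketch of Poisson log-link regression fitted by IRLS, the
# standard algorithm for such a GLM; data below are invented.

def poisson_irls(x, y, iters=25):
    """Fit log E[y] = b0 + b1*x by IRLS; returns (b0, b1)."""
    b0, b1 = math.log(sum(y) / len(y)), 0.0   # start at the mean model
    for _ in range(iters):
        mu = [math.exp(b0 + b1 * xi) for xi in x]
        # Working response z and weights w for the canonical log link
        z = [b0 + b1 * xi + (yi - mi) / mi for xi, yi, mi in zip(x, y, mu)]
        w = mu
        # Weighted least squares for two coefficients (closed form)
        sw = sum(w)
        swx = sum(wi * xi for wi, xi in zip(w, x))
        swxx = sum(wi * xi * xi for wi, xi in zip(w, x))
        swz = sum(wi * zi for wi, zi in zip(w, z))
        swxz = sum(wi * xi * zi for wi, xi, zi in zip(w, x, z))
        det = sw * swxx - swx * swx
        b0 = (swxx * swz - swx * swxz) / det
        b1 = (sw * swxz - swx * swz) / det
    return b0, b1

# Invented example: infarction counts rising with a pressure-gradient index
x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [2, 3, 4, 7, 11]
b0, b1 = poisson_irls(x, y)
```

A positive fitted slope b1 corresponds to the kind of association the study reports between pressure gradients and infarction counts.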

Searching for the neural correlates of visuospatial processing using functional magnetic resonance imaging (fMRI) is usually done in an event-related framework of cognitive subtraction, applying a paradigm comprising visuospatial cognitive components and a corresponding control task. Beyond the methodological caveats of the cognitive subtraction approach, the standard general linear model with fixed hemodynamic response predictors bears the risk of being underspecified: it does not take into account the variability of the blood oxygen level-dependent signal response due to variable task demand and performance at the level of each single trial. This underspecification may result in reduced sensitivity in identifying task-related brain regions. In a rapid event-related fMRI study, we used an extended general linear model including single-trial reaction-time-dependent hemodynamic response predictors for the analysis of an angle discrimination task. In addition to the already known regions in the superior and inferior parietal lobules, mapping the reaction-time-dependent hemodynamic response predictor revealed a more specific network including task demand-dependent regions not detectable with the cognitive subtraction method, such as the bilateral caudate nucleus and insula, the right inferior frontal gyrus and the left precentral gyrus.
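The single-trial extension described above amounts to adding a parametric regressor whose trial-by-trial amplitude follows the reaction time. A simplified sketch of building such a design (invented onsets and RTs; a real fMRI analysis would additionally convolve each regressor with a hemodynamic response function):

```python
# Sketch of extending a GLM design with a single-trial reaction-time-
# dependent predictor: a constant-amplitude trial regressor plus a
# parametric regressor whose amplitudes are the mean-centered RTs.
# Onsets, RTs, and the unconvolved stick-function form are invented.

def build_regressors(n_scans, onsets, rts):
    mean_rt = sum(rts) / len(rts)
    fixed = [0.0] * n_scans        # standard task regressor
    modulated = [0.0] * n_scans    # RT-dependent predictor
    for onset, rt in zip(onsets, rts):
        fixed[onset] = 1.0
        modulated[onset] = rt - mean_rt   # demand varies trial by trial
    return fixed, modulated

fixed, modulated = build_regressors(20, onsets=[2, 7, 12, 17],
                                    rts=[0.8, 1.4, 1.0, 1.8])
```

Mean-centering the RTs keeps the parametric regressor from simply duplicating the fixed task regressor, so the two capture complementary variance.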

The advances in computational biology have made simultaneous monitoring of thousands of features possible. High-throughput technologies not only bring a much richer information context in which to study various aspects of gene function, but also present the challenge of analyzing data with a large number of covariates and few samples. As an integral part of machine learning, classification of samples into two or more categories is almost always of interest to scientists. In this paper, we address the question of classification in this setting by extending partial least squares (PLS), a popular dimension-reduction tool in chemometrics, to generalized linear regression, building on the Iteratively ReWeighted Partial Least Squares (IRWPLS) approach of Marx (1996). We compare our results with two-stage PLS (Nguyen and Rocke, 2002a; Nguyen and Rocke, 2002b) and other classifiers. We show that by phrasing the problem in a generalized linear model setting and applying a bias correction to the likelihood to avoid (quasi-)separation, we often obtain lower classification error rates.
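The PLS dimension-reduction step that IRWPLS builds on extracts components of maximal covariance with the response. A minimal sketch of the first component (the tiny matrix is invented, and both X and y are assumed already mean-centered; the full IRWPLS algorithm iterates this inside reweighted fits):

```python
# Sketch of extracting the first partial least squares (PLS) component;
# X and y are invented and assumed mean-centered.

def first_pls_component(X, y):
    """Return the weight vector w (unit norm) and scores t = Xw."""
    # w is proportional to X'y: the direction of maximal covariance with y
    w = [sum(X[i][j] * y[i] for i in range(len(X))) for j in range(len(X[0]))]
    norm = sum(wj * wj for wj in w) ** 0.5
    w = [wj / norm for wj in w]
    t = [sum(X[i][j] * w[j] for j in range(len(w))) for i in range(len(X))]
    return w, t

X = [[1.0, 2.0], [-1.0, 0.0], [0.0, -2.0]]   # centered predictors
y = [1.0, 0.0, -1.0]                          # centered class labels
w, t = first_pls_component(X, y)
```

The scores t then serve as low-dimensional inputs to the classifier, which is what makes the approach workable with many covariates and few samples.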

An optimal multiple testing procedure is identified for linear hypotheses under the general linear model, maximizing the expected number of false null hypotheses rejected at any significance level. The optimal procedure depends on the unknown data-generating distribution, but can be consistently estimated. Drawing information together across many hypotheses, the estimated optimal procedure provides an empirical alternative hypothesis by adapting to underlying patterns of departure from the null. Proposed multiple testing procedures based on the empirical alternative are evaluated through simulations and an application to gene expression microarray data. Compared to a standard multiple testing procedure, it is not unusual for use of an empirical alternative hypothesis to increase by 50% or more the number of true positives identified at a given significance level.
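The abstract compares against "a standard multiple testing procedure" without naming it; one common baseline of that kind is the Benjamini-Hochberg step-up procedure, sketched below. Both the choice of procedure and the p-values are assumptions for illustration, not details from the paper:

```python
# Benjamini-Hochberg step-up sketch, as one example of a standard
# multiple testing baseline; the p-values are invented.

def benjamini_hochberg(pvals, q=0.05):
    """Return indices of hypotheses rejected at FDR level q."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= q * rank / m:
            k = rank                  # largest rank passing the bound
    return sorted(order[:k])

rejected = benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.6])
```

Here hypotheses 0 and 1 are rejected; the paper's point is that replacing the generic alternative implicit in such procedures with an estimated empirical alternative can substantially raise the number of true positives at the same level.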

This paper proposes a numerically simple routine for locally adaptive smoothing. The locally heterogeneous regression function is modelled as a penalized spline with a smoothly varying smoothing parameter, itself modelled as another penalized spline. This is formulated as a hierarchical mixed model, with spline coefficients following a normal distribution whose variances in turn have a smooth structure. The modelling exercise is in line with Baladandayuthapani, Mallick & Carroll (2005) and Crainiceanu, Ruppert & Carroll (2006). In contrast to these papers, however, Laplace's method is used for estimation based on the marginal likelihood. This is numerically simple and fast, and provides satisfactory results. We also extend the idea to spatial smoothing and to smoothing in the presence of non-normal responses.
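As a much-simplified stand-in for the penalized spline above, the following sketch solves the penalized smoothing problem min Σ(y_i − f_i)² + λΣ(Δ²f_i)² with a single fixed smoothing parameter λ (in the paper, λ itself varies smoothly and is estimated via Laplace's method). Data and λ are invented:

```python
# Whittaker-type penalized smoother: solve (I + lam * D'D) f = y with a
# second-difference penalty D. A fixed-lambda simplification of the
# paper's locally adaptive penalized spline; data are invented.

def solve(A, b):
    """Gaussian elimination with partial pivoting (small dense systems)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for j in range(c, n + 1):
                M[r][j] -= f * M[c][j]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][j] * x[j] for j in range(r + 1, n))) / M[r][r]
    return x

def whittaker_smooth(y, lam):
    n = len(y)
    # Build A = I + lam * D'D, with D the (n-2) x n second-difference matrix
    A = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for k in range(n - 2):
        d = [0.0] * n
        d[k], d[k + 1], d[k + 2] = 1.0, -2.0, 1.0
        for i in range(n):
            for j in range(n):
                A[i][j] += lam * d[i] * d[j]
    return solve(A, y)

y = [0.0, 0.9, 2.2, 2.8, 4.1, 5.0]   # noisy, roughly linear (invented)
f = whittaker_smooth(y, lam=10.0)
```

With a large λ the penalty drives the second differences toward zero, so the fit approaches a straight line; the paper's contribution is letting λ adapt locally instead of being one global constant.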

In environmental epidemiology, exposure X and health outcome Y vary in space and time. We present a method to diagnose the possible influence of unmeasured confounders U on the estimated effect of X on Y, and propose several approaches to robust estimation. The idea is to use space and time as proxy measures for the unmeasured factors U. We start with the time series case, where X and Y are continuous variables at equally spaced times, and assume a linear model. We define matching estimators b(u) that correspond to pairs of observations a specific lag u apart. Controlling for a smooth function of time, St, using a kernel estimator is roughly equivalent to estimating the association with a linear combination of the b(u), with weights that involve two components: the assumptions about the smoothness of St and the normalized variogram of the X process. When an unmeasured confounder U exists, but the model otherwise correctly controls for measured confounders, excess variation in the b(u) is evidence of confounding by U. We use the plot of b(u) versus lag u, the lagged-estimator plot (LEP), to diagnose the influence of U on the effect of X on Y. We use appropriate linear combinations of the b(u), or extrapolate to b(0), to obtain novel estimators that are more robust to the influence of a smooth U. The methods are extended to time series log-linear models and to spatial analyses. The LEP gives a direct view of the magnitude of the estimators at each lag u and provides evidence when a model does not adequately describe the data.

Mild cognitive impairment (MCI) often refers to the preclinical stage of dementia, where the majority develop Alzheimer's disease (AD). Given that neurodegenerative burden and compensatory mechanisms might exist before accepted clinical symptoms of AD are noticeable, the current prospective study aimed to investigate the functioning of brain regions in the visuospatial networks responsible for preclinical symptoms in AD using event-related functional magnetic resonance imaging (fMRI). Eighteen MCI patients were evaluated and clinically followed for approximately 3 years. Five progressed to AD (PMCI) and eight remained stable (SMCI). Thirteen age-, gender- and education-matched controls also participated. An angle discrimination task with varying task demands was used. Brain activation patterns as well as task demand-dependent and -independent signal changes between the groups were investigated by using an extended general linear model including individual performance (reaction time [RT]) of each single trial. Similar behavioral (RT and accuracy) responses were observed between MCI patients and controls. A network of bilateral activations, e.g. dorsal pathway, which increased linearly with increasing task demand, was engaged in all subjects. Compared with SMCI patients and controls, PMCI patients showed a stronger relation between task demand and brain activity in left superior parietal lobules (SPL) as well as a general task demand-independent increased activation in left precuneus. Altered brain function can be detected at a group level in individuals that progress to AD before changes occur at the behavioral level. Increased parietal activation in PMCI could reflect a reduced neuronal efficacy due to accumulating AD pathology and might predict future clinical decline in patients with MCI.

Neural correlates of the electroencephalographic (EEG) alpha rhythm are poorly understood. Here, we related the EEG alpha rhythm in awake humans to blood-oxygen-level-dependent (BOLD) signal change determined by functional magnetic resonance imaging (fMRI). Topographical EEG was recorded simultaneously with fMRI during an eyes-open versus eyes-closed condition and an auditory stimulation versus silence condition. The EEG was separated into spatial components of maximal temporal independence using independent component analysis. Alpha component amplitudes and stimulus conditions served as general linear model regressors of the fMRI signal time course. In both paradigms, EEG alpha component amplitudes were associated with BOLD signal decreases in occipital areas, but not in the thalamus, when a standard BOLD response curve (maximum effect at approximately 6 s) was assumed. The part of the alpha regressor independent of the protocol condition, however, revealed significant positive thalamic and mesencephalic correlations with a mean time delay of approximately 2.5 s between EEG and BOLD signals. The inverse relationship between EEG alpha amplitude and BOLD signals in primary and secondary visual areas suggests that widespread thalamocortical synchronization is associated with decreased brain metabolism. While the temporal relationship of this association is consistent with metabolic changes occurring simultaneously with changes in the alpha rhythm, sites in the medial thalamus and in the anterior midbrain were found to correlate at a short time lag. Assuming a canonical hemodynamic response function, this finding is indicative of activity preceding the actual EEG change by some seconds.
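One simple way to estimate a delay like the reported ~2.5 s between two signals is to maximize their lagged correlation. This is an illustration only, not necessarily the study's method (which used GLM regressors), and the signals below are invented, sampled at 1 s:

```python
# Sketch of estimating the time delay between two signals by maximizing
# the lagged correlation; signals are invented (1 s sampling interval).

def corr(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb)

def best_lag(sig, ref, max_lag):
    """Lag (in samples) at which ref best matches sig delayed by that lag."""
    return max(range(max_lag + 1),
               key=lambda L: corr(sig[:len(sig) - L], ref[L:]))

eeg = [0, 1, 2, 3, 2, 1, 0, 1, 2, 3, 2, 1, 0, 1, 2, 3]
bold = [0, 0, 0, 0, 1, 2, 3, 2, 1, 0, 1, 2, 3, 2, 1, 0]  # eeg delayed 3 s
lag = best_lag(eeg, bold, max_lag=5)
```

Here the recovered lag is 3 samples (3 s); in the study, comparing the fitted delay against the canonical ~6 s hemodynamic peak is what suggested neural activity preceding the EEG change.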

Background: The goal of this study was to determine whether site-specific differences in the subgingival microbiota could be detected by the checkerboard method in subjects with periodontitis. Methods: Subjects with at least six periodontal pockets with a probing depth (PD) between 5 and 7 mm were enrolled in the study. Subgingival plaque samples were collected with sterile curets by a single-stroke procedure at six selected periodontal sites from 161 subjects (966 subgingival sites). Subgingival bacterial samples were assayed with the checkerboard DNA-DNA hybridization method, identifying 37 species. Results: Probing depths of 5, 6, and 7 mm were found at 50% (n = 483), 34% (n = 328), and 16% (n = 155) of sites, respectively. Statistical analysis failed to demonstrate differences in the sum of bacterial counts by tooth type (P = 0.18) or specific location of the sample (P = 0.78). With the exceptions of Campylobacter gracilis (P < 0.001) and Actinomyces naeslundii (P < 0.001), analysis by general linear model multivariate regression failed to identify subject or sample location factors as explanatory of the microbiologic results. A trend of difference in bacterial load by tooth type was found for Prevotella nigrescens (P < 0.01). At a cutoff level of ≥1.0 × 10^5, Porphyromonas gingivalis and Tannerella forsythia (previously T. forsythensis) were present at 48.0% to 56.3% and 46.0% to 51.2% of sampled sites, respectively. Conclusions: Given the similarities in the clinical evidence of periodontitis, the presence and levels of 37 species commonly studied in periodontitis are similar, with no differences between molar, premolar, and incisor/cuspid subgingival sites. This may facilitate microbiologic sampling strategies in subjects during periodontal therapy.

Flammability zone boundaries are very important properties for preventing explosions in the process industries: within the boundaries a flame or explosion can occur, so understanding them is essential to preventing fires and explosions. Very little work has been reported in the literature on modelling the flammability zone boundaries. Two boundaries are defined and studied: the upper flammability zone boundary and the lower flammability zone boundary. Three methods are presented to predict them: the linear model, the extended linear model, and an empirical model. The linear model is a thermodynamic model that uses the upper flammability limit (UFL) and lower flammability limit (LFL) to calculate two adiabatic flame temperatures. When the proper assumptions are applied, the linear model can be reduced to the well-known equation yLOC = z·yLFL for estimating the limiting oxygen concentration. The extended linear model attempts to account for changes in the reactions along the UFL boundary. Finally, the empirical method fits the boundaries with linear equations between the UFL or LFL and the intercept with the oxygen axis. Comparison of the models with experimental data for the flammability zone shows that the best model for estimating the flammability zone boundaries is the empirical method: it fits the limiting oxygen concentration (LOC), upper oxygen limit (UOL), and lower oxygen limit (LOL) quite well. The regression coefficient values for the fits to the LOC, UOL, and LOL are 0.672, 0.968, and 0.959, respectively. This is better than the fit of the "z·yLFL" method for the LOC, for which the regression coefficient is 0.416.
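The reduced form of the linear model, yLOC = z·yLFL, is a one-line calculation: z is the stoichiometric oxygen coefficient of the fuel. The methane numbers below are standard textbook values used only as an example:

```python
# Sketch of the linear model's reduced result y_LOC = z * y_LFL, where z
# is the stoichiometric oxygen coefficient. Example: methane, with
# CH4 + 2 O2 -> CO2 + 2 H2O giving z = 2, and LFL about 5 vol%.

def limiting_oxygen_concentration(z, y_lfl):
    """Estimate the LOC (vol% O2) from the lower flammability limit."""
    return z * y_lfl

loc_methane = limiting_oxygen_concentration(z=2, y_lfl=5.0)   # 10.0 vol%
```

The modest regression coefficient (0.416) reported for this method reflects exactly the kind of gap between such simple stoichiometric estimates and measured LOC values that motivates the empirical fits.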

Four papers, written in collaboration with the author’s graduate school advisor, are presented. In the first paper, uniform and non-uniform Berry-Esseen (BE) bounds on the convergence to normality of a general class of nonlinear statistics are provided; novel applications to specific statistics, including the non-central Student’s, Pearson’s, and the non-central Hotelling’s, are also stated. In the second paper, a BE bound on the rate of convergence of the F-statistic used in testing hypotheses from a general linear model is given. The third paper considers the asymptotic relative efficiency (ARE) between the Pearson, Spearman, and Kendall correlation statistics; conditions sufficient to ensure that the Spearman and Kendall statistics are equally (asymptotically) efficient are provided, and several models are considered which illustrate the use of such conditions. Lastly, the fourth paper proves that, in the bivariate normal model, the ARE between any of these correlation statistics possesses certain monotonicity properties; quadratic lower and upper bounds on the ARE are stated as direct applications of such monotonicity patterns.
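The third paper's asymptotic relative efficiency comparison concerns the Pearson, Spearman, and Kendall correlation statistics themselves. As a minimal sketch of the first two (invented, tie-free sample; not an ARE computation):

```python
# Sketch of the Pearson and Spearman correlation statistics whose
# asymptotic relative efficiency the third paper studies; data invented.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman(x, y):
    """Spearman = Pearson applied to ranks (assumes no ties)."""
    rank = lambda v: [sorted(v).index(a) + 1 for a in v]
    return pearson(rank(x), rank(y))

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [1.2, 1.9, 3.5, 3.9, 5.1]   # monotone in x, so Spearman equals 1
```

On a monotone sample like this, Spearman is exactly 1 while Pearson is slightly below; how efficiently each estimates the underlying association in a bivariate normal model is precisely what the ARE results quantify.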

The need for a stronger and more durable building material is becoming more important as the structural engineering field expands and challenges the behavioral limits of current materials. One of the demands for stronger material is rooted in the effects that dynamic loading has on a structure. High strain rates on the order of 10^1 s^-1 to 10^3 s^-1, though a small part of the overall range of loading rates, which lie anywhere between 10^-8 s^-1 and 10^4 s^-1 and can occur at any point in a structure's life, have very important effects when considering dynamic loading on a structure. Strain rates this high can cause the material and structure to behave differently than at slower strain rates, which necessitates testing materials under such loading to understand their behavior. Ultra high performance concrete (UHPC), a relatively new material in the U.S. construction industry, exhibits many enhanced strength and durability properties compared to standard normal-strength concrete. However, the use of this material in high strain rate applications requires an understanding of UHPC's dynamic properties under corresponding loads. One such dynamic property is the increase in compressive strength under high strain rate load conditions, quantified as the dynamic increase factor (DIF). This factor allows a designer to relate the dynamic compressive strength back to the static compressive strength, which generally is a well-established property. Previous research establishes the relationships for the concept of DIF in design. The generally accepted methodology for obtaining high strain rates to study the enhanced behavior of compressive material strength is the split Hopkinson pressure bar (SHPB). In this research, 83 Cor-Tuf UHPC specimens were tested in dynamic compression using a SHPB at Michigan Technological University.

The specimens were separated into two categories, ambient cured and thermally treated, with aspect ratios of 0.5:1, 1:1, and 2:1 within each category. There was statistically no significant difference in mean DIF for the aspect ratios and cure regimes considered in this study; DIFs ranged from 1.85 to 2.09. Failure modes were observed to be mostly Type 2, Type 4, or combinations thereof for all specimen aspect ratios when classified according to ASTM C39 fracture pattern guidelines. The Comite Euro-International du Beton (CEB) model for DIF versus strain rate does not accurately predict the DIF for the UHPC data gathered in this study. Additionally, a measurement system analysis was conducted to observe variance within the measurement system, and a general linear model analysis was performed to examine the interaction and main effects that aspect ratio, cannon pressure, and cure method have on the maximum dynamic stress.
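The dynamic increase factor itself is simply the ratio of dynamic to static compressive strength. The strength values below are invented, chosen only so the ratio falls in the reported 1.85-2.09 range:

```python
# Sketch of the dynamic increase factor (DIF): the ratio of dynamic to
# static compressive strength. Strength values are invented examples.

def dynamic_increase_factor(dynamic_strength, static_strength):
    """DIF = f_dynamic / f_static (same units, e.g. MPa)."""
    return dynamic_strength / static_strength

dif = dynamic_increase_factor(dynamic_strength=280.0, static_strength=140.0)
```

A DIF of 2.0, as here, means the material carried twice its static compressive strength under the high strain rate load, which is how the SHPB results are related back to the well-established static property.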