Abstract:
Nucleotides, such as adenosine triphosphate (ATP), are released by cellular injury, bind to purinergic receptors expressed on hepatic parenchymal and nonparenchymal cells, and modulate cellular crosstalk. Liver resection and the resulting cellular stress initiate such purinergic signaling responses between hepatocytes and innate immune cells, which regulate and ultimately drive liver regeneration. We studied a murine model of partial hepatectomy using immunodeficient mice to determine the effects of natural killer (NK) cell-mediated purinergic signaling on liver regeneration. We noted first that liver NK cells undergo phenotypic changes post-partial hepatectomy (PH) in vivo, including increased cytotoxicity and a more immature phenotype manifested by alterations in the expression of CD107a, CD27, CD11b, and CD16. Hepatocellular proliferation is significantly decreased in Rag2/common gamma-null mice (lacking T, B, and NK cells) compared with wildtype and Rag1-null mice (lacking T and B cells but retaining NK cells). Extracellular ATP levels are elevated post-PH, and NK cell cytotoxicity is substantively increased in vivo in response to hydrolysis of extracellular ATP by apyrase (soluble NTPDase). Moreover, liver regeneration is significantly increased by the scavenging of extracellular ATP in wildtype mice and in Rag2/common gamma-null mice after adoptive transfer of NK cells. Blockade of NKG2D-dependent interactions significantly decreased hepatocellular proliferation. In vitro, NK cell cytotoxicity is inhibited by extracellular ATP in a manner dependent on P2Y1, P2Y2, and P2X3 receptor activation. Conclusion: We propose that hepatic NK cells are activated and cytotoxic post-PH and support hepatocellular proliferation. NK cell cytotoxicity is, however, attenuated by hepatic release of extracellular ATP by way of the activation of specific P2 receptors. Clearance of extracellular ATP elevates NK cell cytotoxicity and boosts liver regeneration.
Abstract:
Reactive oxygen species (ROS) have been implicated in the etiology of pulmonary fibrosis (PF) in systemic sclerosis. In the bleomycin model, we evaluated the role of acquired mutations in mitochondrial DNA (mtDNA) and respiratory chain defects as a trigger of ROS formation and fibrogenesis. Adult male Wistar rats received a single intratracheal instillation of bleomycin, and their lungs were examined at different time points. Ashcroft scores, collagen, and TGFβ1 levels documented a delayed onset of PF by day 14. In contrast, increased malondialdehyde, as a marker of ROS formation, was detectable as early as 24 hours after bleomycin instillation and continued to increase. At day 7, lung tissue had acquired significant amounts of mtDNA deletions, translating into a significant dysfunction of mtDNA-encoded, but not nucleus-encoded, respiratory chain subunits. mtDNA deletions and markers of mtDNA-encoded respiratory chain dysfunction correlated significantly with pulmonary TGFβ1 concentrations and predicted PF in a multivariate model.
Abstract:
OBJECTIVE Reliable tools to predict long-term outcome among patients with well compensated advanced liver disease due to chronic HCV infection are lacking. DESIGN Risk scores for mortality and for cirrhosis-related complications were constructed with Cox regression analysis in a derivation cohort and evaluated in a validation cohort, both including patients with chronic HCV infection and advanced fibrosis. RESULTS In the derivation cohort, 100/405 patients died during a median 8.1 (IQR 5.7-11.1) years of follow-up. Multivariate Cox analyses showed that age (HR=1.06, 95% CI 1.04 to 1.09, p<0.001), male sex (HR=1.91, 95% CI 1.10 to 3.29, p=0.021), platelet count (HR=0.91, 95% CI 0.87 to 0.95, p<0.001) and log10 aspartate aminotransferase/alanine aminotransferase ratio (HR=1.30, 95% CI 1.12 to 1.51, p=0.001) were independently associated with mortality (C statistic=0.78, 95% CI 0.72 to 0.83). In the validation cohort, 58/296 patients with cirrhosis died during a median of 6.6 (IQR 4.4-9.0) years. Among patients with estimated 5-year mortality risks <5%, 5-10% and >10%, the observed 5-year mortality rates in the derivation cohort and validation cohort were 0.9% (95% CI 0.0 to 2.7) and 2.6% (95% CI 0.0 to 6.1), 8.1% (95% CI 1.8 to 14.4) and 8.0% (95% CI 1.3 to 14.7), and 21.8% (95% CI 13.2 to 30.4) and 20.9% (95% CI 13.6 to 28.1), respectively (C statistic in validation cohort = 0.76, 95% CI 0.69 to 0.83). The risk score for cirrhosis-related complications also incorporated HCV genotype (C statistic = 0.80, 95% CI 0.76 to 0.83 in the derivation cohort; 0.74, 95% CI 0.68 to 0.79 in the validation cohort). CONCLUSIONS The prognosis of patients with chronic HCV infection and compensated advanced liver disease can be accurately assessed with risk scores that include readily available objective clinical parameters.
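A Cox risk score of the kind described above is just a weighted sum of log hazard ratios. The sketch below combines the hazard ratios reported in the abstract into a linear predictor; the variable scaling (e.g. the platelet count unit behind HR=0.91) and the baseline survival function are assumptions, not taken from the study.

```python
import numpy as np

# Hedged sketch: a Cox-style linear predictor built from the reported
# hazard ratios. The per-unit scaling of each covariate is hypothetical;
# the study's exact coding is not given in the abstract.
log_hr = {
    "age": np.log(1.06),        # per year
    "male": np.log(1.91),       # indicator: 1 = male
    "platelets": np.log(0.91),  # per platelet-count unit used in the study
    "log10_ast_alt": np.log(1.30),
}

def linear_predictor(age, male, platelets, log10_ast_alt):
    return (log_hr["age"] * age + log_hr["male"] * male
            + log_hr["platelets"] * platelets
            + log_hr["log10_ast_alt"] * log10_ast_alt)

# Higher linear predictor -> higher mortality risk: an older male with a
# lower platelet count scores higher than a younger female.
high = linear_predictor(70, 1, 100, 1.0)
low = linear_predictor(50, 0, 250, 0.5)
print(high > low)  # True
```

Risk strata such as the <5%, 5-10%, and >10% groups in the abstract are then obtained by mapping the linear predictor through a baseline survival curve estimated in the derivation cohort.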
Abstract:
The mammalian Forkhead Box (Fox) transcription factor FoxM1 is implicated in tumorigenesis. However, the role and regulation of FoxM1 in gastric cancer remain unknown. I examined FoxM1 expression in 86 cases of primary gastric cancer and 57 normal gastric tissue specimens. I found weak expression of FoxM1 protein in normal gastric mucosa, whereas I observed strong staining for FoxM1 in tumor-cell nuclei in various gastric tumors and lymph node metastases. Aberrant FoxM1 expression was associated with VEGF expression and increased angiogenesis in human gastric cancer. A Cox proportional hazards model revealed that FoxM1 expression was an independent prognostic factor in multivariate analysis. Furthermore, overexpression of FoxM1 by gene transfer significantly promoted the growth and metastasis of gastric cancer cells in orthotopic mouse models, whereas knockdown of FoxM1 expression by small interfering RNA did the opposite. Next, I observed that the alteration of tumor growth and metastasis by elevated FoxM1 expression directly correlated with altered VEGF expression and angiogenesis. In addition, promotion of gastric tumorigenesis by FoxM1 directly and significantly correlated with transactivation of vascular endothelial growth factor (VEGF) expression and elevated angiogenesis. To further investigate the underlying mechanisms that result in FoxM1 overexpression in gastric cancer, I examined FoxM1 and Krüppel-like factor 4 (KLF4) expression in primary gastric cancer and normal gastric tissue specimens. Concomitance of increased FoxM1 protein expression and decreased KLF4 protein expression was evident in human gastric cancer. Enforced KLF4 expression suppressed FoxM1 protein expression. Moreover, a region within the proximal FoxM1 promoter was identified as having KLF4-binding sites.
Finally, I found increased FoxM1 expression in the gastric mucosa of villin-Cre-directed, tissue-specific Klf4-null mice. In summary, I offer both clinical and mechanistic evidence that dysregulated expression of FoxM1 plays an important role in gastric cancer development and progression, while KLF4 mediates negative regulation of FoxM1 expression and its loss significantly contributes to FoxM1 dysregulation.
Abstract:
The role of clinical chemistry has traditionally been to evaluate acutely ill or hospitalized patients. Traditional statistical methods have serious drawbacks in that they use univariate techniques. To demonstrate alternative methodology, a multivariate analysis of covariance model was developed and applied to data from the Cooperative Study of Sickle Cell Disease (CSSCD). The purpose of developing the model for the laboratory data from the CSSCD was to evaluate the comparability of the results from the different clinics. Several variables were incorporated into the model in order to control for possible differences among the clinics that might confound any real laboratory differences. Differences for LDH, alkaline phosphatase, and SGOT were identified that will necessitate adjustment by clinic whenever these data are used. In addition, aberrant clinic values for LDH, creatinine, and BUN were identified. The use of any statistical technique, including multivariate analysis, without thoughtful consideration may lead to spurious conclusions that may not be corrected for some time, if ever. However, the advantages of multivariate analysis far outweigh its potential problems. If its use increases as it should, its applicability to the analysis of laboratory data in prospective patient monitoring, quality control programs, and the interpretation of data from cooperative studies could have a major impact on the health and well-being of a large number of individuals.
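The advantage of testing several analytes jointly rather than one at a time can be illustrated with a two-group Hotelling's T^2 test, the simplest multivariate analogue of comparing clinics analyte by analyte. This is a generic sketch on synthetic data, not the CSSCD analysis of covariance itself.

```python
import numpy as np
from scipy import stats

# Hedged sketch: two clinics, three analytes (e.g. LDH, ALP, SGOT),
# synthetic data with a mean shift in the first analyte only.
rng = np.random.default_rng(0)
clinic_a = rng.normal([200.0, 80.0, 30.0], 10.0, size=(40, 3))
clinic_b = rng.normal([215.0, 80.0, 30.0], 10.0, size=(40, 3))

def hotelling_t2(x, y):
    nx, ny, p = len(x), len(y), x.shape[1]
    diff = x.mean(0) - y.mean(0)
    # Pooled sample covariance of the two groups
    s = ((nx - 1) * np.cov(x, rowvar=False)
         + (ny - 1) * np.cov(y, rowvar=False)) / (nx + ny - 2)
    t2 = (nx * ny) / (nx + ny) * diff @ np.linalg.solve(s, diff)
    # Convert T^2 to an F statistic with (p, nx + ny - p - 1) df
    f = t2 * (nx + ny - p - 1) / ((nx + ny - 2) * p)
    return f, stats.f.sf(f, p, nx + ny - p - 1)

f, pval = hotelling_t2(clinic_a, clinic_b)
print(round(f, 2), pval < 0.05)
```

A single joint test like this controls the error rate across all analytes at once, which is the core advantage the abstract claims over repeated univariate comparisons.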
Abstract:
This research examined the extent to which Health Belief Model (HBM) and socioeconomic variables were useful in explaining whether or not more effective contraceptive methods were used among married fecund women intending no additional births. The source of the data was the 1976 National Survey of Family Growth, conducted under the auspices of the National Center for Health Statistics. Using the HBM as a framework for multivariate analyses, limited support was found (using available measures) that the HBM components of motivation and perceived efficacy influence the likelihood of more effective contraceptive method use. Support was also found that modifying variables suggested by the HBM can influence the effects of HBM components on the likelihood of more effective method use. Socioeconomic variables were found, using all cases and some subgroups, to have a significant additional influence on the likelihood of use of more effective methods. Limited support was found for the concept that the greater the opportunity costs of an unwanted birth, the greater the likelihood of use of more effective contraceptive methods. This research supports the use of HBM and socioeconomic variables to explain the likelihood of a protective health behavior: use of more effective contraception when no additional births are intended.
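The kind of multivariate analysis described above amounts to a binary-outcome regression of method choice on HBM components plus socioeconomic controls. The sketch below fits such a model on synthetic data; the predictor names ("motivation", "efficacy", "ses") are illustrative stand-ins, not the NSFG survey measures.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hedged sketch, not the NSFG analysis: synthetic HBM-style predictors
# generating a binary "uses a more effective method" outcome.
rng = np.random.default_rng(1)
n = 500
motivation = rng.normal(size=n)
efficacy = rng.normal(size=n)
ses = rng.normal(size=n)   # socioeconomic index (hypothetical)
logit = 0.8 * motivation + 0.6 * efficacy + 0.4 * ses
uses_effective_method = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([motivation, efficacy, ses])
model = LogisticRegression().fit(X, uses_effective_method)
print(np.round(model.coef_, 2))  # estimates should be near (0.8, 0.6, 0.4)
```

The sign and size of each fitted coefficient play the same role as the "support" statements in the abstract: a positive, significant coefficient on a component indicates it raises the likelihood of effective method use.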
Abstract:
Campus behavior management is important for ensuring classroom order and promoting positive academic outcomes. Previous studies have shown the importance of individual student and campus personnel characteristics and campus context for explaining campus discipline rates (e.g., rates of suspension and expulsion). Assessing campus discipline rates, while controlling for these individual and campus characteristics, is important for the monitoring, evaluation, and intervention role of policymakers as well as state and federal level education agencies. Systems or metrics exist that measure other student outcomes (i.e., academic performance) with controls for individual and campus characteristics, but none exist that monitor these differences for discipline rates across campuses. In this paper, we use a multivariate model to analyze a longitudinal, statewide dataset for all secondary students in Texas from 2000 to 2008 in order to examine how campus discipline rates differ across schools with statistically similar students, teachers, and campus characteristics. The findings are important for understanding that some schools with similar characteristics have significantly different exclusionary discipline rates, and they are important for informing policy and agency level decision-making. The methodology described can easily be used by monitoring agencies as well as local school districts.
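The core of the approach described above is to compare each campus's observed discipline rate with the rate predicted for statistically similar campuses, then examine the residuals. This is a simplified single-level sketch on synthetic data; the Texas analysis uses a richer longitudinal multivariate model, and the characteristic names here are hypothetical.

```python
import numpy as np

# Hedged sketch: regress campus discipline rates on standardized campus
# characteristics, then flag campuses far above their predicted rate.
rng = np.random.default_rng(2)
n_schools = 200
enrollment = rng.normal(0, 1, n_schools)
pct_econ_disadv = rng.normal(0, 1, n_schools)
teacher_exp = rng.normal(0, 1, n_schools)
rate = 5 + 1.2 * pct_econ_disadv - 0.5 * teacher_exp + rng.normal(0, 1, n_schools)

X = np.column_stack([np.ones(n_schools), enrollment, pct_econ_disadv, teacher_exp])
beta, *_ = np.linalg.lstsq(X, rate, rcond=None)
residual = rate - X @ beta

# Campuses more than 2 residual SDs above prediction discipline far more
# than statistically similar campuses and would be flagged for review.
flagged = np.flatnonzero(residual > 2 * residual.std())
print(len(flagged))
```

In the monitoring setting the abstract describes, the flagged set is exactly the group of schools whose exclusionary discipline rates differ significantly from schools with similar students, teachers, and campus characteristics.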
Abstract:
Maximizing data quality may be especially difficult in trauma-related clinical research. Strategies are needed to improve data quality and assess the impact of data quality on clinical predictive models. This study had two objectives. The first was to compare missing data between two multi-center trauma transfusion studies: a retrospective study (RS) using medical chart data with minimal data quality review and the PRospective Observational Multi-center Major Trauma Transfusion (PROMMTT) study with standardized quality assurance. The second objective was to assess the impact of missing data on clinical prediction algorithms by evaluating blood transfusion prediction models using PROMMTT data. RS (2005-06) and PROMMTT (2009-10) investigated trauma patients receiving ≥ 1 unit of red blood cells (RBC) from ten Level I trauma centers. Missing data were compared for 33 variables collected in both studies using mixed effects logistic regression (including random intercepts for study site). Massive transfusion (MT) patients received ≥ 10 RBC units within 24h of admission. Correct classification percentages for three MT prediction models were evaluated using complete case analysis and multiple imputation based on the multivariate normal distribution. A sensitivity analysis for missing data was conducted to estimate the upper and lower bounds of correct classification using assumptions about missing data under best and worst case scenarios. Most variables (17/33=52%) had <1% missing data in RS and PROMMTT. Of the remaining variables, 50% demonstrated less missingness in PROMMTT, 25% had less missingness in RS, and 25% were similar between studies. Missing percentages for MT prediction variables in PROMMTT ranged from 2.2% (heart rate) to 45% (respiratory rate). For variables missing >1%, study site was associated with missingness (all p≤0.021). Survival time predicted missingness for 50% of RS and 60% of PROMMTT variables. 
Complete case proportions for the MT models ranged from 41% to 88%. Complete case analysis and multiple imputation demonstrated similar correct classification results. Sensitivity analysis upper-lower bound ranges for the three MT models were 59-63%, 36-46%, and 46-58%. Prospective collection of ten-fold more variables with data quality assurance reduced overall missing data. Study site and patient survival were associated with missingness, suggesting that data were not missing completely at random and that complete case analysis may lead to biased results. Evaluating clinical prediction model accuracy may be misleading in the presence of missing data, especially with many predictor variables. The proposed sensitivity analysis, which estimates correct classification under upper (best case scenario) and lower (worst case scenario) bounds, may be more informative than multiple imputation, which provided results similar to complete case analysis.
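The bounding idea behind the sensitivity analysis is simple arithmetic: patients with missing predictors are assumed to be all correctly classified (best case) or all misclassified (worst case). A minimal sketch, with hypothetical counts:

```python
# Hedged sketch of the best/worst-case bounding described above.
def classification_bounds(n_total, n_complete, n_correct_complete):
    n_missing = n_total - n_complete
    lower = n_correct_complete / n_total                 # worst case: all missing-data cases wrong
    upper = (n_correct_complete + n_missing) / n_total   # best case: all missing-data cases right
    return lower, upper

# Example: 100 patients, 70 with complete data, 56 of those correctly classified.
lo, hi = classification_bounds(100, 70, 56)
print(lo, hi)  # 0.56 0.86
```

The width of the resulting interval (here 30 percentage points) directly conveys how much the missing data could distort the reported accuracy, which is the sense in which the abstract argues these bounds are more informative than a single imputed point estimate.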
Abstract:
Complex diseases such as cancer result from multiple genetic changes and environmental exposures. Due to the rapid development of genotyping and sequencing technologies, we are now able to more accurately assess the causal effects of many genetic and environmental factors. Genome-wide association studies have been able to localize many causal genetic variants predisposing to certain diseases. However, these studies explain only a small portion of the variation in the heritability of diseases. More advanced statistical models are urgently needed to identify and characterize additional genetic and environmental factors and their interactions, which will enable us to better understand the causes of complex diseases. In the past decade, thanks to increasing computational capabilities and novel statistical developments, Bayesian methods have been widely applied in genetics/genomics research and have demonstrated superiority over some standard approaches in certain research areas. Gene-environment and gene-gene interaction studies are among the areas where Bayesian methods can fully exert their advantages. This dissertation focuses on developing new Bayesian statistical methods for data analysis with complex gene-environment and gene-gene interactions, as well as extending some existing methods for gene-environment interactions to other related areas. It includes three sections: (1) deriving a Bayesian variable selection framework for hierarchical gene-environment and gene-gene interactions; (2) developing Bayesian Natural and Orthogonal Interaction (NOIA) models for gene-environment interactions; and (3) extending the applications of two Bayesian statistical methods, originally developed for gene-environment interaction studies, to other related types of studies such as adaptive borrowing of historical data.
We propose a Bayesian hierarchical mixture model framework that allows us to investigate genetic and environmental effects, gene-gene interactions (epistasis), and gene-environment interactions in the same model. It is well known that, in many practical situations, a natural hierarchical structure exists between the main effects and interactions in the linear model. Here we propose a model that incorporates this hierarchical structure into the Bayesian mixture model, such that irrelevant interaction effects can be removed more efficiently, resulting in more robust, parsimonious, and powerful models. We evaluate both the 'strong hierarchical' and 'weak hierarchical' models, which specify that both, or at least one, of the main effects of interacting factors must be present for the interaction to be included in the model. Extensive simulation results show that the proposed strong and weak hierarchical mixture models control the proportion of false positive discoveries and yield a powerful approach to identifying predisposing main effects and interactions in studies with complex gene-environment and gene-gene interactions. We also compare these two models with the 'independent' model, which does not impose the hierarchical constraint, and observe their superior performance in most of the situations considered. The proposed models are applied to real data analyses of gene-environment interactions in lung cancer and cutaneous melanoma case-control studies. Bayesian statistical models have the advantage of being able to incorporate useful prior information into the modeling process. Moreover, the Bayesian mixture model outperforms the multivariate logistic model in terms of parameter estimation and variable selection performance in most cases.
Our proposed models impose the hierarchical constraints, which further improve the Bayesian mixture model by reducing the proportion of false positive findings among the identified interactions while successfully identifying the reported associations. This is practically appealing for investigating causal factors among a moderate number of candidate genetic and environmental factors along with a relatively large number of interactions. The natural and orthogonal interaction (NOIA) models of genetic effects were previously developed to provide an analysis framework in which the estimated effects for a quantitative trait are statistically orthogonal regardless of whether Hardy-Weinberg Equilibrium (HWE) holds within loci. Ma et al. (2012) recently developed a NOIA model for gene-environment interaction studies and showed the advantages of using this model for detecting true main effects and interactions, compared with the usual functional model. In this project, we propose a novel Bayesian statistical model that combines the Bayesian hierarchical mixture model with the NOIA statistical model and the usual functional model. The proposed Bayesian NOIA model demonstrates more power at detecting non-null effects, with higher marginal posterior probabilities. We also review two Bayesian statistical models (a Bayesian empirical shrinkage-type estimator and Bayesian model averaging) that were developed for gene-environment interaction studies. Inspired by these Bayesian models, we develop two novel statistical methods that can handle related problems such as borrowing data from historical studies. The proposed methods are analogous to the methods for gene-environment interactions in their success at balancing statistical efficiency and bias in a unified model.
Through extensive simulation studies, we compare the operating characteristics of the proposed models with those of existing models, including the hierarchical meta-analysis model. The results show that the proposed approaches adaptively borrow the historical data in a data-driven way. These novel models may have a broad range of statistical applications in both genetic/genomic and clinical studies.
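The strong and weak heredity constraints discussed in this abstract can be made concrete by enumerating the allowed model space for two main effects A and B and their interaction AB. This is a generic illustration of the constraint definitions, not the dissertation's mixture-model implementation:

```python
from itertools import combinations

# All subsets of the three candidate terms: A, B, and the interaction AB.
terms = ["A", "B", "AB"]
all_models = [set(c) for k in range(len(terms) + 1)
              for c in combinations(terms, k)]

def allowed(model, rule):
    if "AB" not in model:
        return True
    if rule == "strong":   # interaction requires BOTH main effects
        return {"A", "B"} <= model
    if rule == "weak":     # interaction requires AT LEAST ONE main effect
        return bool({"A", "B"} & model)
    return True            # "independent": no constraint

strong = [m for m in all_models if allowed(m, "strong")]
weak = [m for m in all_models if allowed(m, "weak")]
print(len(all_models), len(strong), len(weak))  # 8 5 7
```

Shrinking the model space (8 models down to 5 under strong heredity) is precisely how the hierarchical constraint removes implausible interaction-only models and reduces false positive interaction findings.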
Abstract:
A multiproxy study of palaeoceanographic and climatic changes in northernmost Baffin Bay shows that major environmental changes have occurred since the deglaciation of the area at about 12 500 cal. yr BP. The interpretation is based on sedimentology, benthic and planktonic foraminifera and their isotopic composition, as well as diatom assemblages in the sedimentary records at two core sites, one located in the deeper central part of northernmost Baffin Bay and one in a separate trough closer to the Greenland coast. A revised chronology for the two records is established on the basis of 15 previously published AMS 14C age determinations. A basal diamicton is overlain by laminated, fossil-free sediments. Our data from the early part of the fossiliferous record (12 300 - 11 300 cal. yr BP), which is also initially laminated, indicate extensive seasonal sea-ice cover and brine release. There is indication of a cooling event between 11 300 and 10 900 cal. yr BP, and maximum Atlantic Water influence occurred between 10 900 and 8200 cal. yr BP (no sediment recovery between 8200 and 7300 cal. yr BP). A gradual, but fluctuating, increase in sea-ice cover is seen after 7300 cal. yr BP. Sea-ice diatoms were particularly abundant in the central part of northernmost Baffin Bay, presumably due to the inflow of Polar waters from the Arctic Ocean, and less sea ice occurred at the near-coastal site, which was under continuous influence of the West Greenland Current. Our data from the deep, central part show a fluctuating degree of upwelling after c. 7300 cal. yr BP, culminating between 4000 and 3050 cal. yr BP. There was a gradual increase in the influence of cold bottom waters from the Arctic Ocean after about 3050 cal. yr BP, when agglutinated foraminifera became abundant. A superimposed short-term change in the sea-surface proxies is correlated with the Little Ice Age cooling.
Abstract:
Interannual environmental variability in Peru is dominated by the El Niño Southern Oscillation (ENSO). The most dramatic changes are associated with the warm El Niño (EN) phase (as opposed to the cold La Niña phase), which disrupts the normal coastal upwelling and affects the dynamics of many coastal marine and terrestrial resources. This study presents a trophic model for Sechura Bay, located at the northern extension of the Peruvian upwelling system, where ENSO-induced environmental variability is most extreme. Using an initial steady-state model for the year 1996, we explore the dynamics of the ecosystem through the year 2003 (including the strong EN of 1997/98 and the weaker EN of 2002/03). With support from the literature, we force the biomass of several non-trophically-mediated 'drivers' (e.g. Scallops, Benthic detritivores, Octopus, and Littoral fish) to observe whether the fit between historical and simulated changes (by the trophic model) improves. The results indicate that the Sechura Bay ecosystem is a relatively inefficient system from a community energetics point of view, likely due to the periodic perturbations of ENSO. A combination of high system productivity and low-trophic-level target species of invertebrates (i.e. scallops) and fish (i.e. anchoveta) results in high catches and an efficient fishery. The importance of environmental drivers is suggested by the relatively small improvements in the fit of the simulation when trophic drivers are added to the remaining functional groups' dynamics. An additional multivariate regression model is presented for the scallop Argopecten purpuratus, which demonstrates a significant correlation of both spawning stock size and riverine discharge-mediated mortality with catch levels.
These results are discussed in the context of the appropriateness of trophodynamic modeling in relatively open systems, and how management strategies may be focused given the highly environmentally influenced marine resources of the region.
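The multivariate regression for the scallop can be sketched as an ordinary least-squares fit of catch on spawning stock size and a discharge-mediated mortality proxy. The data and coefficients below are synthetic; only the sign structure (catch rising with stock, falling with discharge-driven mortality) follows the abstract.

```python
import numpy as np

# Hedged sketch of a two-predictor catch regression, on synthetic data.
rng = np.random.default_rng(3)
n = 50
spawning_stock = rng.uniform(1, 10, n)     # hypothetical stock index
river_discharge = rng.uniform(0, 5, n)     # hypothetical discharge index
catch = 2.0 + 1.5 * spawning_stock - 0.8 * river_discharge + rng.normal(0, 0.5, n)

X = np.column_stack([np.ones(n), spawning_stock, river_discharge])
beta, *_ = np.linalg.lstsq(X, catch, rcond=None)
print(np.round(beta, 1))  # roughly [2.0, 1.5, -0.8]
```

A positive stock coefficient together with a negative discharge coefficient is the pattern the abstract reports: both spawning stock and riverine discharge-mediated mortality are significant predictors of catch.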
Abstract:
A two-dimensional finite element model of current flow in the front surface of a PV cell is presented. In order to validate this model we perform an experimental test. Later, particular attention is paid to the effects of non-uniform illumination in the finger direction which is typical in a linear concentrator system. Fill factor, open circuit voltage and efficiency are shown to decrease with increasing degree of non-uniform illumination. It is shown that these detrimental effects can be mitigated significantly by reoptimization of the number of front surface metallization fingers to suit the degree of non-uniformity. The behavior of current flow in the front surface of a cell operating at open circuit voltage under non-uniform illumination is discussed in detail.
Abstract:
An extended 3D distributed model based on distributed circuit units for the simulation of triple-junction solar cells under realistic light distributions has been developed. Special emphasis has been placed on the model's capability to accurately account for current mismatch and chromatic aberration effects. The model has been validated, as shown by the good agreement between experimental and simulation results, for different light spot characteristics including spectral mismatch and irradiance non-uniformities. The model is then used to predict the performance of a triple-junction solar cell for a light spot corresponding to a real optical architecture, in order to illustrate its suitability in assisting concentrator system analysis and design.
Abstract:
The aim of this article is to propose an analytical approximate squeeze-film lubrication model of the human ankle joint for a quick assessment of the synovial pressure field and the load-carrying capacity due to squeeze motion. The model starts from the theory of boosted lubrication for human articular joints (Walker et al., Rheum Dis 27:512–520, 1968; Maroudas, Lubrication and wear in joints. Sector, London, 1969) and accounts for fluid transport across the articular cartilage using Darcy's equation to describe synovial fluid motion through a porous cartilage matrix. The human ankle joint is assumed to be cylindrical, enabling motion in the sagittal plane only. The proposed model is based on a modified Reynolds equation; its integration yields a quick assessment of the synovial pressure field, showing good agreement with results obtained numerically (Hlavacek, J Biomech 33:1415–1422, 2000). The analytical integration allows a closed-form description of the synovial fluid film force and the calculation of the unsteady gap thickness.
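The modified Reynolds equation such a model builds on can be sketched in its standard one-dimensional squeeze-film form, with a Darcy leakage term for flow into the porous cartilage. This is the generic starting point for models of this type, under assumed notation; the paper's exact modification may differ.

```latex
% Squeeze-film Reynolds equation in the sagittal direction x, with film
% thickness h(x,t), synovial fluid viscosity \mu, pressure p, and a Darcy
% leakage flux q_p into the cartilage (permeability k, layer thickness \delta):
\frac{\partial}{\partial x}\!\left( \frac{h^{3}}{12\,\mu}\,
      \frac{\partial p}{\partial x} \right)
  = \frac{\partial h}{\partial t} + q_{p},
\qquad
q_{p} \approx \frac{k}{\mu\,\delta}\, p .
```

Setting k = 0 recovers the classical impermeable-surface squeeze film; the leakage term is what lets the model capture fluid transport across the cartilage via Darcy's law, as described above.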
Abstract:
Criminals are common to all societies. To fight against them, the community takes different security measures such as, for example, establishing a police force. Thus, crime depletes the common wealth not only through criminal acts but also because of the cost of maintaining a police force. In this paper, we present a mathematical model of a criminal-prone self-protected society that is divided into socio-economic classes. We study the effect of a non-null crime rate on a free-of-criminals society, which is taken as a reference system. As a consequence, we define a criminal-prone society as one whose free-of-criminals steady state is unstable under small perturbations of a certain socio-economic context. Finally, we compare two alternative strategies to control crime: (i) enhancing police efficiency, either by enlarging its size or by updating its technology, versus (ii) reducing criminal appeal or promoting social classes at risk.
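The stability notion used above (a free-of-criminals steady state that is, or is not, unstable under small perturbations) can be illustrated with a deliberately simplified one-class toy model; the equation and parameters below are hypothetical, not the paper's multi-class system.

```python
# Hedged toy sketch: criminal fraction C grows logistically at rate r and
# is removed by a police force of size P with efficiency e:
#     dC/dt = r*C*(1 - C) - e*P*C
# The crime-free state C = 0 is stable only when e*P > r.
def simulate(r, e, P, c0=0.01, dt=0.01, steps=5000):
    c = c0
    for _ in range(steps):
        c += dt * (r * c * (1 - c) - e * P * c)
        c = max(c, 0.0)
    return c

weak_police = simulate(r=1.0, e=0.5, P=1.0)    # e*P < r: crime persists
strong_police = simulate(r=1.0, e=2.0, P=1.0)  # e*P > r: crime dies out
print(weak_police > 0.1, strong_police < 1e-3)  # True True
```

The two runs mirror the paper's policy comparison: raising e (better technology) or P (a larger force) moves the system across the stability threshold, restoring the free-of-criminals state.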