936 results for errors-in-variables model
Abstract:
Lung cancer is a devastating disease with a very poor prognosis. The design of better treatments for patients would be greatly aided by mouse models that closely resemble the human disease. The most common type of human lung cancer is adenocarcinoma, which frequently metastasizes. Unfortunately, current models for this tumor are inadequate because they lack metastasis. Based on the molecular findings in human lung cancer and the metastatic potential of osteosarcomas in mutant p53 mouse models, I hypothesized that mice with both K-ras and p53 missense mutations might develop metastatic lung adenocarcinomas. Therefore, I incorporated both K-rasLA1 and p53R172HΔg alleles into mouse lung cells to establish a more faithful model of human lung adenocarcinoma for translational and mechanistic studies. Mice with both mutations (K-rasLA1/+ p53R172HΔg/+) developed advanced lung adenocarcinomas with histopathology similar to that of human tumors. These lung adenocarcinomas were highly aggressive and metastasized to multiple intrathoracic and extrathoracic sites in a pattern similar to that seen in lung cancer patients. This mouse model also showed gender differences in cancer-related death, and 23.2% of the mice developed pleural mesotheliomas. In a preclinical study, the new drug erlotinib (Tarceva) decreased the number and size of lung lesions in this model. These data demonstrate that this mouse model most closely mimics human metastatic lung adenocarcinoma and provides an invaluable system for translational studies. To screen for genes important for metastasis, gene expression profiles of primary lung adenocarcinomas and metastases were analyzed. Microarray data showed that the two groups segregated by gene expression and differed in 79 highly differentially expressed genes (more than 2.5-fold change, p<0.001). The microarray data for Bub1b, Vimentin and CCAM1 were validated in tumors by quantitative real-time PCR (QPCR).
Bub1b, a mitotic checkpoint gene, was overexpressed in metastases, and this correlated with more chromosomal abnormalities in metastatic cells. Vimentin, a marker of epithelial-mesenchymal transition (EMT), was also highly expressed in metastases. Interestingly, Twist, a key EMT inducer, was also highly upregulated in metastases by QPCR, and this significantly correlated with the overexpression of Vimentin in the same tumors. These data suggest that EMT occurs in lung adenocarcinomas and is a key mechanism for the development of metastasis in K-rasLA1/+ p53R172HΔg/+ mice. Thus, this mouse model provides a unique system to further probe the molecular basis of metastatic lung cancer.
High-resolution microarray analysis of chromosome 20q in human colon cancer metastasis model systems
Abstract:
Amplification of human chromosome 20q DNA is the most frequently occurring chromosomal abnormality detected in sporadic colorectal carcinomas and shows significant correlation with liver metastases. Through comprehensive high-resolution microarray comparative genomic hybridization and microarray gene expression profiling, we characterized chromosome 20q amplicon genes associated with human colorectal cancer metastasis in two in vitro metastasis model systems. The results revealed increasing complexity of the 20q genomic profile from the primary tumor-derived cell lines to the lymph node and liver metastasis-derived cell lines. Expression analysis of chromosome 20q revealed a subset of overexpressed genes residing within the regions of genomic copy number gain in all the tumor cell lines, suggesting these are chromosome 20q copy-number-responsive genes. Based on their preferential expression levels in the model system cell lines and their known biological function, four of the overexpressed genes mapping to the common intervals of genomic copy gain were considered the most promising candidate colorectal metastasis-associated genes. Validation of the genomic copy number and expression array data was carried out on these genes, with one gene, DNMT3B, standing out as expressed at relatively higher levels in the metastasis-derived cell lines compared with their primary-derived counterparts in both model systems analyzed. The data provide evidence for the role of chromosome 20q genes with low copy gain and elevated expression in the clonal evolution of metastatic cells and suggest that such genes may serve as early biomarkers of metastatic potential. The data also support the utility of combined microarray comparative genomic hybridization and expression array analysis for identifying copy-number-responsive genes in areas of low DNA copy gain in cancer cells.
Abstract:
A multivariate frailty hazard model is developed for joint modeling of three correlated time-to-event outcomes: (1) local recurrence, (2) distant recurrence, and (3) overall survival. The term frailty is introduced to model population heterogeneity. The dependence is modeled by conditioning on a shared frailty that is included in the three hazard functions. Independent variables can be included in the model as covariates. Markov chain Monte Carlo methods are used to estimate the posterior distributions of the model parameters. The algorithm used in the present application is the hybrid Metropolis-Hastings algorithm, which simultaneously updates all parameters using evaluations of the gradient of the log posterior density. The performance of this approach is examined in simulation studies using exponential and Weibull distributions. We apply the proposed methods to a study of patients with soft tissue sarcoma, which motivated this research. Our results indicate that patients who received chemotherapy had better overall survival, with a hazard ratio of 0.242 (95% CI: 0.094-0.564), and a lower risk of distant recurrence, with a hazard ratio of 0.636 (95% CI: 0.487-0.860), but no significant improvement in local recurrence, with a hazard ratio of 0.799 (95% CI: 0.575-1.054). The advantages and limitations of the proposed models and future research directions are discussed.
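The dependence structure described in this abstract, in which one shared frailty multiplies all three hazards, can be illustrated with a minimal simulation. This is not the authors' MCMC implementation; it assumes exponential baseline hazards and a log-normal frailty (both hypothetical choices) purely to show how a shared frailty induces positive correlation among the outcomes.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
# Shared log-normal frailty w_i enters each hazard multiplicatively:
# h_k(t | w_i) = w_i * lambda_k  (exponential baselines, a simplifying assumption)
frailty = rng.lognormal(mean=0.0, sigma=0.7, size=n)
base = np.array([0.05, 0.03, 0.02])  # hypothetical baseline hazards for the 3 outcomes
# Conditional on the frailty, the three event times are independent exponentials
times = rng.exponential(1.0 / (frailty[:, None] * base))  # shape (n, 3)
# The shared frailty induces positive dependence between any two outcomes:
r12 = np.corrcoef(np.log(times[:, 0]), np.log(times[:, 1]))[0, 1]
```

A subject with high frailty tends to experience all three events early, which is exactly the dependence the joint model captures by conditioning on the shared term.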
Abstract:
Pulmonary fibrosis is a devastating and lethal lung disease with no current cure. Research into cellular signaling pathways able to modulate aspects of pulmonary inflammation and fibrosis will aid the development of effective therapies for its treatment. Our laboratory has generated a transgenic/knockout mouse with systemic elevations in adenosine due to the partial lack of its metabolic enzyme, adenosine deaminase (ADA). These mice spontaneously develop progressive lung inflammation and severe pulmonary fibrosis, suggesting that aberrant adenosine signaling influences the development and/or progression of the disease in these animals. These mice also show marked increases in the pro-fibrotic mediator osteopontin (OPN), which are reversed by ADA therapy that lowers lung adenosine levels and ameliorates aspects of the disease. OPN is known to be regulated by intracellular signaling pathways that can be accessed through adenosine receptors, particularly the low-affinity A2BR receptor, suggesting that adenosine receptor signaling may be responsible for the induction of OPN in our model. In vitro, adenosine and the broad-spectrum adenosine receptor agonist NECA induced a 2.5-fold increase in OPN transcripts in primary alveolar macrophages. This induction was blocked pharmacologically through antagonism of the A2BR receptor and genetically through deletion of the receptor subtype in these cells, supporting the hypothesis that the A2BR receptor is responsible for the induction of OPN in our model. These findings demonstrate for the first time that adenosine signaling is an important modulator of pulmonary fibrosis in ADA-deficient mice, and that this is in part due to signaling through the A2BR receptor, which leads to the induction of the pro-fibrotic molecule osteopontin.
Abstract:
Diabetic nephropathy is the most common cause of end-stage renal disease (ESRD) in the United States. African-Americans and patients with type 1 diabetes (T1D) are at increased risk. We studied the rate of progression of glomerular filtration rate (GFR), and the factors that influenced it, in 401 African-American T1D patients who were followed for 6 years through the observational cohort New Jersey 725 study. Patients with ESRD and/or GFR<20 ml/min were excluded. The mean (SD) baseline GFR was 106.8 (27.04) ml/min, and it decreased by a mean (SD) of 13.8 (32.2) ml/min during the 6-year period (2.3 ml/min/year). In patients with baseline macroproteinuria, GFR decreased by 31.8 (39.0) ml/min (5.3 ml/min/year), compared to 8.2 (27.6) ml/min (1.3 ml/min/year) in patients without it (p<0.00001). Six-year GFR fell to <20 ml/min in 5.25% of all patients, but in 16.8% of macroproteinuric patients. A model including baseline GFR, proteinuria category and hypertension category explained 35% of the 6-year GFR variability (p<0.0001). After adjustment for the other variables in the model, 6-year GFR was 24.9 ml/min lower in macroproteinuric patients than in those without proteinuria (p=0.0001), and 12.6 ml/min lower in patients with treated but uncontrolled hypertension compared to normotensive patients (p=0.003). In this sample of patients, with an elevated mean glycosylated hemoglobin of 12.4%, glycemic control did not independently influence GFR deterioration, nor did BMI, cholesterol, gender, age at diabetes onset or socioeconomic level. Taken together, our findings suggest that proteinuria and hypertension are the most important factors associated with GFR deterioration in African-American T1D patients.
Abstract:
Interaction effects are an important scientific interest in many areas of research. A common approach for investigating the interaction effect of two continuous covariates on a response variable is a cross-product term in multiple linear regression. In epidemiological studies, the two-way analysis of variance (ANOVA) type of method has also been used to examine the interaction effect by replacing the continuous covariates with their discretized levels. However, the implications of the model assumptions of either approach have not been examined, and statistical validation has focused only on the general methods, not specifically on the interaction effect. In this dissertation, we investigated the validity of both approaches based on their mathematical assumptions for non-skewed data. We showed that linear regression may not be an appropriate model when an interaction effect exists, because it then implies a highly skewed distribution for the response variable. We also showed that the normality and constant-variance assumptions required by ANOVA are not satisfied in the model where the continuous covariates are replaced with their discretized levels. Therefore, naïve application of the ANOVA method may lead to an incorrect conclusion. Given the problems identified above, we proposed a novel method, modified from the traditional ANOVA approach, to rigorously evaluate the interaction effect. The analytical expression of the interaction effect was derived from the conditional distribution of the response variable given the discretized continuous covariates. A testing procedure that combines the p-values from each level of the discretized covariates was developed to test the overall significance of the interaction effect. In a simulation study, the proposed method was more powerful than least squares regression and the ANOVA method in detecting the interaction effect when the data come from a trivariate normal distribution.
The proposed method was applied to a dataset from the National Institute of Neurological Disorders and Stroke (NINDS) tissue plasminogen activator (t-PA) stroke trial, and a baseline age-by-weight interaction effect was found to be significant in predicting the change from baseline in NIHSS at Month 3 among patients who received t-PA therapy.
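The cross-product approach this abstract refers to can be sketched in a few lines. The covariates, coefficients, and noise level below are hypothetical, and ordinary least squares is solved directly with numpy rather than through a statistics package.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
x1, x2 = rng.normal(size=(2, n))
# Hypothetical response with a true interaction coefficient of 0.5
y = 1.0 + 0.8 * x1 - 0.3 * x2 + 0.5 * x1 * x2 + rng.normal(scale=0.5, size=n)
# Cross-product term in multiple linear regression:
# y = b0 + b1*x1 + b2*x2 + b3*(x1*x2) + error
X = np.column_stack([np.ones(n), x1, x2, x1 * x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# beta[3] estimates the interaction effect
```

The dissertation's point is that when beta[3] is truly nonzero, the implied distribution of y can be highly skewed, which is what motivates the alternative testing procedure.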
Abstract:
The association between fine particulate matter air pollution (PM2.5) and cardiovascular disease (CVD) mortality was spatially analyzed for Harris County, Texas, at the census tract level. The objective was to assess how increased PM2.5 exposure related to CVD mortality in this area while controlling for race, income, education, and age. An estimated exposure raster was created for Harris County using kriging to estimate PM2.5 exposure at the census tract level. The PM2.5 exposure and CVD mortality rates were analyzed in an Ordinary Least Squares (OLS) regression model, and the residuals were subsequently assessed for spatial autocorrelation. Race, median household income, and age were all significant (p<0.05) predictors in the model. This study found that for every one μg/m3 increase in PM2.5 exposure, holding the age and education variables constant, an increase of 16.57 CVD deaths per 100,000 would be predicted for increased minimum exposure values, and an increase of 14.47 CVD deaths per 100,000 for increased maximum exposure values. This finding supports previous studies associating PM2.5 exposure with CVD mortality. The study further identified the areas of greatest PM2.5 exposure in Harris County as the geographical locations of the populations at highest risk of CVD (i.e., predominantly older, low-income populations with a predominance of African Americans). The magnitude of the effect of PM2.5 exposure on CVD mortality rates in the study region indicates a need for further community-level studies in Harris County, and suggests that reducing excess PM2.5 exposure would reduce CVD mortality.
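Assessing OLS residuals for spatial autocorrelation, as described in this abstract, is commonly done with Moran's I. The sketch below uses random coordinates, stand-in residuals, and row-standardized inverse-distance weights; these are all hypothetical choices for illustration, not necessarily those made in the study.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100
coords = rng.uniform(0, 10, size=(n, 2))   # hypothetical tract centroids
resid = rng.normal(size=n)                 # stand-in for OLS residuals
# Row-standardized inverse-distance spatial weights (one common choice)
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
W = np.where(d > 0, 1.0 / np.where(d > 0, d, 1.0), 0.0)
W /= W.sum(axis=1, keepdims=True)
# Moran's I on the centered residuals
z = resid - resid.mean()
moran_I = (z @ W @ z) / (z @ z)
# Under no spatial autocorrelation, E[I] = -1/(n-1), close to zero
```

A Moran's I near -1/(n-1) suggests the OLS residuals are spatially uncorrelated; values well above it would signal clustering that the regression has not accounted for.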
Abstract:
Objectives: To compare mental health care utilization regarding the source, types, and intensity of mental health services received, unmet need for services, and out-of-pocket cost among non-institutionalized psychologically distressed women and men. Method: Cross-sectional data for 19,325 non-institutionalized mentally distressed adult respondents to the National Survey on Drug Use and Health (NSDUH) for the years 2006-2008, representing over twenty-nine million U.S. adults, were analyzed. To assess the relative odds for women compared to men, logistic regression analysis was used for source of service, types of barriers, unmet need and cost; zero-inflated negative binomial regression for intensity of utilization; and ordinal logistic regression analysis for quantifying out-of-pocket expenditure. Results: Overall, 43% of mentally distressed adults utilized some form of mental health treatment, representing 12.6 million U.S. psychologically distressed adults. Females utilized more mental health care than males in the previous 12 months (OR: 1.70; 95% CI: 1.54, 1.83). Similarly, females were 54% more likely to get help for psychological distress in an outpatient setting, and being female was associated with an increased probability of using medication for mental distress (OR: 1.72; 95% CI: 1.63, 1.98). Women were 1.25 times likelier than men to visit a mental health center (specialty care). Females were positively associated with unmet need (OR: 1.50; 95% CI: 1.29, 1.75) after taking into account predisposing, enabling, and need (PEN) characteristics. Women with perceived unmet needs were 23% (OR: 0.77; 95% CI: 0.59, 0.99) less likely than men to report societal accommodation (stigma) as a barrier to mental health care. At any given cutoff point, women were 1.74 times likelier to be in the higher payment categories for inpatient out-of-pocket cost when the other variables in the model were held constant.
Conclusions: Women utilize more specialty mental health care, report more unmet need, and pay more inpatient out-of-pocket costs than men. These gender disparities persist even after controlling for predisposing, enabling, and need variables. Creating policies that not only provide mental health care access but also de-stigmatize mental illness will bring us one step closer to eliminating gender disparities in mental health care.
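The odds ratios in this abstract come from adjusted logistic regression models; the unadjusted version of such an estimate, with its Wald confidence interval, can be reproduced from a 2×2 table. The counts below are invented for illustration and are not NSDUH figures.

```python
import numpy as np

# Hypothetical 2x2 table: rows = gender, columns = used mental health care
#                  used    not used
table = np.array([[5200, 6800],   # women
                  [3100, 7300]])  # men
odds_women = table[0, 0] / table[0, 1]
odds_men = table[1, 0] / table[1, 1]
or_ = odds_women / odds_men                       # unadjusted odds ratio
# 95% Wald CI on the log-odds-ratio scale
se_log_or = np.sqrt((1.0 / table).sum())
ci = np.exp(np.log(or_) + np.array([-1.96, 1.96]) * se_log_or)
```

Logistic regression generalizes this calculation by adjusting the odds ratio for the predisposing, enabling, and need covariates the study controls for.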
Abstract:
The association between increases in cerebral glucose metabolism and the development of acidosis is largely inferential, based on reports linking hyperglycemia with poor neurological outcome, lactate accumulation, and the severity of acidosis. We measured the local cerebral metabolic rate for glucose (lCMRglc) and an index of brain pH, the acid-base index (ABI), concurrently, and characterized their interaction in a model of focal cerebral ischemia in rats in a double-label autoradiographic study using (14C) 2-deoxyglucose and (14C) dimethyloxazolidinedione. Computer-assisted digitization and analysis permitted the simultaneous quantification of the two variables on a pixel-by-pixel basis in the same brain slices. Hemispheres ipsilateral to tamponade-induced middle cerebral artery occlusion showed areas of normal, depressed and elevated glucose metabolic rate (as defined by an interhemispheric asymmetry index) after two hours of ischemia. Regions of normal glucose metabolic rate showed normal ABI (pH ± SD = 6.97 ± 0.09), regions of depressed lCMRglc showed severe acidosis (6.69 ± 0.14), and regions of elevated lCMRglc showed moderate acidosis (6.88 ± 0.10), all significantly different at the 0.00125 level by analysis of variance. Moderate acidosis in regions of increased lCMRglc suggests that anaerobic glycolysis causes excess protons to be generated by the uncoupling of ATP synthesis and hydrolysis.
Abstract:
This investigation compares two different methodologies for calculating the national cost of epilepsy: the provider-based survey method (PBSM) and the patient-based medical charts and billing method (PBMC&BM). The PBSM uses the National Hospital Discharge Survey (NHDS), the National Hospital Ambulatory Medical Care Survey (NHAMCS) and the National Ambulatory Medical Care Survey (NAMCS) as its sources of utilization. The PBMC&BM uses patient data, charts and billings, to determine utilization rates for specific components of hospital, physician and drug prescription services. The 1995 hospital and physician cost of epilepsy is estimated to be $722 million using the PBSM and $1,058 million using the PBMC&BM. The difference of $336 million results from a $136 million difference in utilization and a $200 million difference in unit cost. Utilization: the utilization difference of $136 million is composed of an inpatient variation of $129 million ($100 million hospital and $29 million physician) and an ambulatory variation of $7 million. The $100 million hospital variance is attributed to the inclusion of febrile seizures in the PBSM, $−79 million, and the exclusion of admissions attributed to epilepsy, $179 million. The former suggests that the diagnostic codes used in the NHDS may not properly match the current definition of epilepsy as used in the PBMC&BM. The latter suggests NHDS errors in the attribution of an admission to the principal diagnosis. The $29 million variance in inpatient physician utilization is the result of different per-day-of-care physician visit rates, 1.3 for the PBMC&BM versus 1.0 for the PBSM. The absence of visit frequency measures in the NHDS affects the internal validity of the PBSM estimate and requires the investigator to make conservative assumptions. The remaining ambulatory resource utilization variance is $7 million.
Of this amount, $22 million is the result of an underestimate of ancillaries in the NHAMCS and NAMCS extrapolations using the patient visit weight. Unit cost: the resource cost variation is $200 million; inpatient is $22 million and ambulatory is $178 million. The inpatient variation of $22 million is composed of $19 million in hospital per-day rates, due to a higher cost per day in the PBMC&BM, and $3 million in physician visit rates, due to a higher cost per visit in the PBMC&BM. The ambulatory cost variance is $178 million, composed of higher per-physician-visit costs of $97 million and higher per-ancillary costs of $81 million. Both are attributed to the PBMC&BM's precise identification of resource utilization, which permits accurate valuation. Conclusion: both methods have specific limitations. The PBSM's strengths are its sample designs, which lead to nationally representative estimates and permit statistical point and confidence interval estimation for the nation for certain variables under investigation. However, the findings of this investigation suggest that the internal validity of the estimates derived is questionable and that important additional information required to precisely estimate the cost of an illness is absent. The PBMC&BM is the superior method for identifying the resources utilized in the physician encounter with the patient, permitting more accurate valuation. However, the PBMC&BM does not have the statistical reliability of the PBSM; it relies on synthesized national prevalence estimates to extrapolate a national cost estimate. While precision is important, the ability to generalize to the nation may be limited due to the small number of patients that are followed.
Abstract:
Coastal managers require reliable spatial data on the extent and timing of potential coastal inundation, particularly in a changing climate. Most sea level rise (SLR) vulnerability assessments are undertaken using the easily implemented bathtub approach, where areas adjacent to the sea and below a given elevation are mapped using a deterministic line dividing potentially inundated from dry areas. This method requires only elevation data, usually in the form of a digital elevation model (DEM). However, inherent errors in the DEM and in the spatial analysis of the bathtub model propagate into the inundation mapping. The aim of this study was to assess the impacts of spatially variable and spatially correlated elevation errors in high-spatial-resolution DEMs on mapping coastal inundation. Elevation errors were best modelled using regression-kriging. This geostatistical model takes the spatial correlation in elevation errors into account, which has a significant impact on analyses that include spatial interactions, such as inundation modelling. The spatial variability of elevation errors was partially explained by land cover and terrain variables. Elevation errors were simulated using sequential Gaussian simulation, a Monte Carlo probabilistic approach. 1,000 error simulations were added to the original DEM and reclassified using a hydrologically correct bathtub method. The probability of inundation under a scenario combining a 1-in-100-year storm event with a 1 m SLR was calculated as the proportion of the 1,000 simulations in which a location was inundated. This probabilistic approach can be used in a risk-averse decision-making process by planning for scenarios with different probabilities of occurrence. For example, the results showed that at a 1% exceedance probability, the inundated area was approximately 11% larger than that mapped using the deterministic bathtub approach.
The probabilistic approach provides visually intuitive maps that convey the uncertainties inherent in spatial data and analysis.
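The per-cell probability calculation described in this abstract (counting, across the error simulations, the fraction of times a cell falls at or below the water level) can be sketched as follows. For brevity, the sketch draws independent Gaussian errors rather than the spatially correlated sequential Gaussian simulations used in the study, and the DEM, water level, and error standard deviation are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)
ny, nx = 50, 50
dem = rng.uniform(0.0, 3.0, size=(ny, nx))  # hypothetical DEM, metres
water_level = 1.5                           # storm surge + SLR scenario, metres
n_sims = 1000
sigma = 0.2                                 # stand-in for the modelled DEM error

inundated = np.zeros((ny, nx))
for _ in range(n_sims):
    # The study's errors are spatially correlated; independent noise is used
    # here only to keep the sketch short.
    err = rng.normal(0.0, sigma, size=(ny, nx))
    inundated += (dem + err) <= water_level

prob = inundated / n_sims                   # per-cell probability of inundation
deterministic = dem <= water_level          # plain bathtub map for comparison
```

Thresholding `prob` at a chosen exceedance probability (e.g. 1%) yields the risk-averse map, which generally covers a larger area than the deterministic bathtub line.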
Abstract:
High-latitude ecosystems play an important role in the global carbon cycle and in regulating the climate system, and they are presently undergoing rapid environmental change. Accurate land cover data sets are required both to document these changes and to provide land-surface information for benchmarking and initializing Earth system models. Earth system models also require specific land cover classification systems based on plant functional types (PFTs) rather than species or ecosystems, so post-processing of existing land cover data is often required. This study compares, over Siberia, multiple land cover data sets against one another and against auxiliary data to identify key uncertainties that contribute to variability in PFT classifications and would introduce errors into Earth system modeling. Land cover classification systems from GLC 2000, GlobCover 2005 and 2009, and MODIS collections 5 and 5.1 are first aggregated to a common legend and then compared to high-resolution land cover classification systems, vegetation continuous fields (MODIS VCF) and satellite-derived tree heights (to discriminate between sparse, shrub, and forest vegetation). The GlobCover data set, with a lower threshold for tree cover, taller tree heights and better spatial resolution, tends to show tree-cover distributions closer to the high-resolution data. It has therefore been chosen to build new PFT maps for the ORCHIDEE land surface model at 1 km scale. Compared to the original PFT data set, the new PFT maps based on GlobCover 2005 and an updated cross-walking approach differ mainly in the characterization of forests and the degree of tree cover. The partition of grasslands and bare soils now appears more realistic compared with ground truth data. This new vegetation map provides a framework for further development of new PFTs in the ORCHIDEE model, such as shrubs, lichens and mosses, to better represent the water and carbon cycles in northern latitudes.
Updated land cover data sets are critical for improving and maintaining the relevance of Earth system models for assessing climate and human impacts on biogeochemistry and biophysics.
Abstract:
The characteristics of a global set-up of the Finite-Element Sea-Ice Ocean Model under forcing for the period 1958-2004 are presented. The model set-up is designed to study the variability in the deep-water mass formation areas and is therefore regionally better resolved in the deep-water formation areas of the Labrador Sea, Greenland Sea, Weddell Sea and Ross Sea. The sea-ice model reproduces realistic sea-ice distributions and variability in the sea-ice extent of both hemispheres, as well as sea-ice transport that compares well with observational data. Based on a comparison between the model and ocean weather ship data in the North Atlantic, we observe that the vertical structure is well captured in areas with high resolution. In our model set-up, we are able to simulate decadal ocean variability, including several salinity anomaly events and the corresponding fingerprint in the vertical hydrography. The ocean state of the model set-up features pronounced variability in the Atlantic Meridional Overturning Circulation, as well as in the associated mixed layer depth pattern in the North Atlantic deep-water formation areas.
Abstract:
The main purpose of robot calibration is the correction of possible errors in the robot parameters. This paper presents a method for the kinematic calibration of a parallel robot equipped with one camera in hand. In order to preserve the mechanical configuration of the robot, the camera is used to acquire incremental positions of the end effector from a spherical object fixed in the world reference frame. The positions of the end effector are related to the incremental positions of the resolvers of the robot's motors, and a kinematic model of the robot is used to find a new set of parameters that minimizes the errors in the kinematic equations. Additionally, properties of the spherical object and the intrinsic camera parameters are used to model the projection of the object in the image and improve the spatial measurements. Finally, the robotic system is designed to carry out tracking tasks, and the calibration of the robot is validated by integrating the errors of the visual controller.
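The core idea of the calibration described here (relate measured end-effector positions to a kinematic model and solve for the parameter set that minimizes the kinematic-equation errors) can be illustrated on a deliberately simple case. A planar two-link serial arm is linear in its link lengths, so the fit reduces to linear least squares; the paper's parallel robot requires iterative nonlinear minimization instead, and all values below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)
l_true = np.array([0.30, 0.25])                 # hypothetical true link lengths (m)
q = rng.uniform(-np.pi, np.pi, size=(200, 2))   # recorded joint angles

# x-coordinate of the end effector of a 2-link planar arm:
# x = l1*cos(q1) + l2*cos(q1 + q2), which is linear in (l1, l2)
A = np.stack([np.cos(q[:, 0]), np.cos(q[:, 0] + q[:, 1])], axis=1)
# Camera-style measurements with small noise standing in for visual error
x_meas = A @ l_true + rng.normal(scale=1e-3, size=200)

# Calibration: least-squares fit of the link lengths from the measurements
l_est, *_ = np.linalg.lstsq(A, x_meas, rcond=None)
```

The same structure (predicted position minus measured position, minimized over the parameter vector) carries over to the parallel robot, only with a nonlinear kinematic model and an iterative solver.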
Abstract:
During the Peninsular War, Napoleon's and Wellington's armies were aware of the lack of precision in the maps of Spain and its provinces that appeared in Tomás López's Geographical Atlas of Spain. The errors were due to the non-topographical surveying method he used, which he had learned from his teacher Jean Baptiste Bourguignon d'Anville. To map all of the Spanish provinces, Tomás López divided them into circles of three leagues in diameter (16,718 m), taking a particular town as the centre. He asked the town's priest to draw a map of the territory and to complete a questionnaire that Tomás López sent to him. The priest was to return the two documents after he had completed them. Subsequently, at his desk, Tomás López used the maps and reports, as well as other graphic and written sources from various locations, to make an outline of each map. Next, he made a mosaic that served as a pattern for drawing the final provincial map. We will see how this method was applied in two concrete cases, the villages of Chavaler and Monteagudo in the Spanish province of Soria, and verify its degree of accuracy. We will use the maps drawn by the priests in 1767, the final map of the province published in 1804 by Tomás López, and a current map of the province showing the angular and linear errors in López's map.