929 results for location-dependent data query
Abstract:
In the United States, "binge" drinking among college students is an emerging public health concern because of its significant physical and psychological effects on young adults. The focus is on identifying interventions that can help decrease high-risk drinking behavior in this group of drinkers. One such intervention is motivational interviewing (MI), a client-centered therapy that aims to resolve client ambivalence by developing discrepancy and engaging the client in change talk. Recently, interest has grown in determining the active ingredients that influence the alliance between the therapist and the client. This study is a secondary analysis of data from the Southern Methodist Alcohol Research Trial (SMART) project, a dismantling trial of MI and feedback among heavy-drinking college students. The present project examines the relationship between therapist and client language in MI sessions in a sample of "binge" drinking college students. Of the 126 SMART tapes, 30 (15 from the 'MI with feedback' group and 15 from the 'MI only' group) were randomly selected for this study. MISC 2.1, a mutually exclusive and exhaustive coding system, was used to code the audio/videotaped MI sessions. Therapist and client language were analyzed for communication characteristics. Overall, therapists adopted an MI-consistent style and clients engaged in change talk. Counselor acceptance, empathy, spirit, and complex reflections were all significantly related to client change talk (p-values ranged from 0.001 to 0.047). Additionally, therapist 'advice without permission' and MI-inconsistent therapist behaviors were strongly correlated with client sustain talk (p-values ranged from 0.006 to 0.048).
Simple linear regression models showed a significant association between MI-consistent (MICO) therapist language (independent variable) and change talk (dependent variable), and between MI-inconsistent (MIIN) therapist language (independent variable) and sustain talk (dependent variable). The study has several limitations, including small sample size, self-selection bias, poor inter-rater reliability for the global scales, and the lack of a temporal measure of therapist and client language. Future studies might consider a larger sample size to obtain more statistical power. In addition, the relationship among therapist language, client language, and drinking outcomes needs to be explored.
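As a sketch of the kind of simple linear regression reported above, the minimal ordinary-least-squares fit below regresses client change-talk counts on MI-consistent (MICO) therapist utterance counts. The per-session counts are invented for illustration; they are not SMART data.

```python
def simple_linear_regression(x, y):
    """Return (slope, intercept) from an ordinary least-squares fit."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    sxx = sum((xi - mean_x) ** 2 for xi in x)           # sum of squares of x
    sxy = sum((xi - mean_x) * (yi - mean_y)             # cross-products
              for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical per-session counts: MICO therapist utterances vs. change talk
mico = [10, 14, 18, 22, 30]
change_talk = [5, 8, 9, 13, 16]
slope, intercept = simple_linear_regression(mico, change_talk)
# A positive slope corresponds to the reported MICO -> change-talk association.
```

A full analysis would also report the p-value for the slope; this sketch shows only the point estimates.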
Abstract:
Microarray technology is a high-throughput method for genotyping and gene expression profiling. Limited sensitivity and specificity are among the essential problems of this technology. Most existing methods of microarray data analysis have an apparent limitation in that they deal only with the numerical part of microarray data and make little use of gene sequence information. Because the gene sequences precisely define the physical objects being measured by a microarray, it is natural to make them an essential part of the data analysis. This dissertation focused on the development of free-energy models to integrate sequence information into microarray data analysis. The models were used to characterize the mechanism of hybridization on microarrays and to enhance the sensitivity and specificity of microarray measurements.
Cross-hybridization is a major obstacle to the sensitivity and specificity of microarray measurements. In this dissertation, we evaluated the scope of the cross-hybridization problem on short-oligo microarrays. The results showed that cross-hybridization on arrays is mostly caused by oligo fragments with a run of 10 to 16 nucleotides complementary to the probes. Furthermore, a free-energy-based model was proposed to quantify the amount of cross-hybridization signal on each probe. This model treats cross-hybridization as an integral effect of the interactions between a probe and various off-target oligo fragments. Using public spike-in datasets, the model showed high accuracy in predicting the cross-hybridization signals on probes whose intended targets are absent from the sample.
Several prospective models were proposed to improve the Positional-Dependent Nearest-Neighbor (PDNN) model for better quantification of gene expression and cross-hybridization.
The problem addressed in this dissertation is fundamental to microarray technology.
We expect that this study will help us understand the detailed mechanism that determines sensitivity and specificity on microarrays. Consequently, this research will have a wide impact on how microarrays are designed and how the data are interpreted.
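As an illustration of the complementary-run finding above (cross-hybridization driven largely by off-target fragments sharing a run of 10 to 16 complementary bases with a probe), the sketch below scans for the longest such run. The sequences and helper names are hypothetical, and this is a screening heuristic, not the dissertation's free-energy model.

```python
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def revcomp(seq):
    """Reverse complement of a DNA sequence."""
    return "".join(COMPLEMENT[b] for b in reversed(seq))

def longest_shared_run(probe, fragment):
    """Length of the longest substring of `fragment` whose reverse
    complement appears in `probe` (a proxy for duplex-forming runs)."""
    target = revcomp(fragment)
    best = 0
    n = len(target)
    for i in range(n):
        # Only try to extend beyond the current best; containment is
        # monotone, so a failed extension ends this start position.
        for j in range(i + best + 1, n + 1):
            if target[i:j] in probe:
                best = j - i
            else:
                break
    return best

probe = "ATCGGATCCGTTAGCAACGT"                  # hypothetical 20-mer probe
off_target = revcomp(probe[3:16]) + "AAAA"      # shares a 13-base run
run = longest_shared_run(probe, off_target)
# A run of 10 or more would flag this fragment as a likely
# cross-hybridization source under the 10-16 nucleotide finding.
```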
Abstract:
Research studies on the association between exposures to air contaminants and disease frequently use worn dosimeters to measure the concentration of the contaminant of interest. But investigation of exposure determinants requires additional knowledge beyond concentration, i.e., knowledge about personal activity such as whether the exposure occurred in a building or outdoors. Current studies frequently depend upon manual activity logging to record location. This study's purpose was to evaluate the use of a worn data logger recording three environmental parameters—temperature, humidity, and light intensity—as well as time of day, to determine indoor or outdoor location, with an ultimate aim of eliminating the need to manually log location or at least providing a method to verify such logs. For this study, data collection was limited to a single geographical area (Houston, Texas metropolitan area) during a single season (winter) using a HOBO H8 four-channel data logger. Data for development of a Location Model were collected using the logger for deliberate sampling of programmed activities in outdoor, building, and vehicle locations at various times of day. The Model was developed by analyzing the distributions of environmental parameters by location and time to establish a prioritized set of cut points for assessing locations. The final Model consisted of four "processors" that varied these priorities and cut points. Data to evaluate the Model were collected by wearing the logger during "typical days" while maintaining a location log. The Model was tested by feeding the typical day data into each processor and generating assessed locations for each record. These assessed locations were then compared with true locations recorded in the manual log to determine accurate versus erroneous assessments. The utility of each processor was evaluated by calculating overall error rates across all times of day, and calculating individual error rates by time of day. 
Unfortunately, the error rates were so large that using the Model would offer no benefit. Another analysis, in which assessed locations were classified as either indoor (including both building and vehicle) or outdoor, yielded slightly lower error rates that still precluded any benefit from the Model's use.
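The Model's "processors" can be pictured as a prioritized cascade of cut points on the logged parameters. The sketch below is one hypothetical processor; all thresholds are invented for illustration and are not the study's derived cut points.

```python
def assess_location(temp_c, humidity_pct, light_lux, hour):
    """Assign a logger record to outdoor/building/vehicle via
    prioritized cut points (illustrative thresholds only)."""
    # Highest priority: daytime outdoor light far exceeds indoor levels.
    if 7 <= hour <= 18 and light_lux > 1000:
        return "outdoor"
    # Next: conditioned buildings hold temperature and humidity
    # in a narrow band.
    if 20 <= temp_c <= 25 and 30 <= humidity_pct <= 60:
        return "building"
    # Fallback in this simplified sketch.
    return "vehicle"

# Hypothetical "typical day" records: (temp C, RH %, light lux, hour)
records = [
    (12.0, 70.0, 5000.0, 13),  # cold, humid, bright midday
    (22.0, 45.0, 300.0, 10),   # conditioned air, dim
    (28.0, 25.0, 800.0, 9),    # warm, dry, moderate light
]
assessed = [assess_location(*r) for r in records]
```

Comparing such assessed locations against a manual log, record by record, yields the error rates the study used to evaluate each processor.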
Abstract:
NADPH-cytochrome P-450 reductase releases FMN and FAD upon dilution into slightly acidic potassium bromide. The flavins are released with positive cooperativity. Dithiothreitol protects the FAD-dependent cytochrome c reductase activity against inactivation by free radicals. Behavior in potassium bromide is sensitive to changes in pH. High-performance hydroxylapatite chromatography resolved the FAD-dependent reductase from the holoreductase. For 96% FAD-dependent reductase, the overall yield was 12%.
High FAD dependence was matched by a low FAD content, with FAD/FMN ratios as low as 0.015. There were three molecules of FMN for every four molecules of reductase. The aporeductase had negligible activity toward cytochrome c, ferricyanide, menadione, dichlorophenolindophenol, nitro blue tetrazolium, oxygen, and acetyl pyridine adenine dinucleotide phosphate. A four-minute incubation in FAD reconstituted one half to all of the specific activity, per milligram of protein, of untreated reductase, depending upon the substrate. After a two-hour reconstitution, the reductase eluted from hydroxylapatite at the location of the holoreductase. It had little flavin dependence, was equimolar in FMN and FAD, and had nearly the specific activity (per mole of flavin) of untreated reductase.
The lack of activity and the ability of FMN also to reconstitute suggest that the redox center of FAD is essential for catalysis rather than for structure. Dependence upon FAD is consistent with existing hypotheses for the catalytic cycle of the reductase.
Abstract:
This project is based on secondary analyses of data collected in Starr County, Texas, from 1981 to 1991 to determine the prevalence, incidence, and risk factors of macular edema in Hispanics with non-insulin-dependent diabetes. Two studies were conducted. The first examined the prevalence of macular edema in this population. Of the 310 diabetics included in the study, 22 had macular edema; of these, 9 had clinically significant macular edema. Fasting blood glucose was significantly associated with macular edema: each 10 mg/dl increase in fasting blood glucose was associated with a 1.07-fold increase in the risk of having macular edema. Individuals with fasting blood glucose ≥200 mg/dl were more than three times as likely to have macular edema as those with fasting blood glucose <200 mg/dl.
The second study examined the incidence of, and risk factors for, macular edema in this Hispanic population. A total of 240 Hispanics with non-insulin-dependent diabetes mellitus and without macular edema were followed for 1223 person-years. During the follow-up period, 27 individuals developed macular edema (2.21/100 person-years). High fasting blood glucose and glycosylated hemoglobin were strong and independent risk factors for macular edema. Participants taking insulin were 3.9 times as likely to develop macular edema as those not taking insulin. Systolic blood pressure was significantly related to macular edema: each 10 mmHg increase in systolic blood pressure was associated with a 1.3-fold increase in the risk of macular edema.
In summary, this study suggests that hyperglycemia is the main underlying factor for retinal pathological changes in this diabetic population, and that macular edema probably is not the result of a sudden change in blood glucose level.
It also determined that changes in blood pressure, particularly systolic blood pressure, could trigger the development of macular edema.
Based on the prevalence reported in this study, an estimated 35,500 Hispanic diabetics in the US have macular edema. This poses a major public health challenge, particularly in areas with high concentrations of Mexican Americans. It also highlights the importance of public health measures directed to Mexican Americans, such as health education, improved access to medical care, and periodic, careful ophthalmologic examination by ophthalmologists knowledgeable and experienced in the management of diabetic macular edema.
Abstract:
Sizes and power of selected two-sample tests of the equality of survival distributions are compared by simulation for small samples from unequally, randomly censored exponential distributions. The tests investigated include parametric tests (F, Score, Likelihood, Asymptotic), logrank tests (Mantel, Peto-Peto), and Wilcoxon-type tests (Gehan, Prentice). Equal-sized samples, n = 8, 16, 32, with 1000 (size) and 500 (power) simulation trials, are compared for 16 combinations of the censoring proportions 0%, 20%, 40%, and 60%. For n = 8 and 16, the Asymptotic, Peto-Peto, and Wilcoxon tests perform at nominal 5% size expectations, but the F, Score, and Mantel tests exceeded the 5% size confidence limits for one-third of the censoring combinations. For n = 32, all tests showed proper size, with the Peto-Peto test most conservative in the presence of unequal censoring. Powers of all tests are compared for exponential hazard ratios of 1.4 and 2.0. There is little difference in the power characteristics of the tests within the classes considered. The Mantel test showed 90% to 95% power efficiency relative to the parametric tests. Wilcoxon-type tests have the lowest relative power but are robust to differential censoring patterns. A modified Peto-Peto test shows power comparable to the Mantel test. For n = 32, a specific Weibull-exponential comparison of crossing survival curves suggests that the relative powers of logrank and Wilcoxon-type tests depend on the scale parameter of the Weibull distribution. Wilcoxon-type tests appear more powerful than logrank tests for late-crossing survival curves and less powerful for early-crossing ones. Guidelines for the appropriate selection of two-sample tests are given.
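The simulation setup described above can be sketched by drawing unequally, randomly censored exponential samples: each subject gets an exponential failure time and an independent exponential censoring time, and the smaller of the two is observed. The hazards, censoring rates, and seed below are hypothetical choices for illustration, not the study's parameters.

```python
import random

def censored_exponential_sample(n, hazard, censor_rate, rng):
    """Return (observed_time, event) pairs; event = 1 if the failure
    was observed, 0 if the subject was censored first."""
    data = []
    for _ in range(n):
        t = rng.expovariate(hazard)               # true failure time
        if censor_rate > 0:
            c = rng.expovariate(censor_rate)      # independent censoring time
            data.append((min(t, c), 1 if t <= c else 0))
        else:
            data.append((t, 1))                   # 0% censoring arm
    return data

rng = random.Random(42)
# Two arms of n = 16 with an exponential hazard ratio of 2.0;
# censor_rate / (hazard + censor_rate) gives the expected censoring fraction.
group1 = censored_exponential_sample(16, hazard=1.0, censor_rate=0.25, rng=rng)
group2 = censored_exponential_sample(16, hazard=2.0, censor_rate=0.25, rng=rng)
n_events = sum(e for _, e in group1 + group2)
```

Repeating this draw over many trials and applying each two-sample test to the resulting data is how empirical size and power are tabulated.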
Abstract:
A general model for the illness-death stochastic process with covariates has been developed for the analysis of survival data. This model incorporates important baseline and time-dependent covariates to adjust the transition probabilities and survival probabilities properly. The follow-up period is subdivided into small intervals, and a constant hazard is assumed within each interval. An approximation formula is derived to estimate the transition parameters when the exact transition time is unknown.
The method is illustrated with data from a study on the prevention of recurrent myocardial infarction and subsequent mortality, the Beta-Blocker Heart Attack Trial (BHAT). This method provides an analytical approach that simultaneously accommodates both fatal and nonfatal events in the model. Under this analysis, the effectiveness of treatment with respect to fatal and nonfatal events can be compared between the placebo and propranolol groups.
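Under the piecewise-constant hazard assumption above, the survival probability is the exponential of minus the hazard accumulated over the intervals up to t, S(t) = exp(-sum_i lambda_i * d_i). The interval boundaries and hazards below are hypothetical values chosen only to illustrate the computation.

```python
import math

def survival_prob(t, boundaries, hazards):
    """S(t) under a piecewise-constant hazard.
    boundaries: increasing interval endpoints (first interval starts at 0);
    hazards[i] applies on (boundaries[i-1], boundaries[i]]."""
    cumulative = 0.0
    prev = 0.0
    for b, lam in zip(boundaries, hazards):
        if t <= prev:
            break
        # Accumulate hazard over the part of this interval before t.
        cumulative += lam * (min(t, b) - prev)
        prev = b
    return math.exp(-cumulative)

# Hazard 0.1 on [0, 1] and 0.2 on (1, 2]; evaluate survival at t = 1.5:
# S(1.5) = exp(-(0.1 * 1.0 + 0.2 * 0.5)) = exp(-0.2)
p = survival_prob(1.5, [1.0, 2.0], [0.1, 0.2])
```

The same accumulation, applied per transition type, underlies the transition-probability estimates in the illness-death model.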
Abstract:
Background and purpose. Brain lesions in acute ischemic stroke measured by imaging tools provide important clinical information for diagnosis, and final infarct volume has been considered a potential surrogate marker for clinical outcomes. Strong correlations have been found between lesion volume and clinical outcomes in the NINDS t-PA Stroke Trial, but little has been published about lesion location and clinical outcomes. Studies of the National Institute of Neurological Disorders and Stroke (NINDS) t-PA Stroke Trial data found that the direction of the t-PA treatment effect on the decrease in CT lesion volume was consistent with the observed clinical effects at 3 months, but measures of t-PA treatment benefit based on CT lesion volume showed diminished statistical significance compared with clinical scales.
Methods. We used a global test to evaluate the hypothesis that lesion locations were strongly associated with clinical outcomes within each treatment group at 3 months after stroke. The anatomic locations from CT scans were used for analysis. We also assessed the effect of t-PA on lesion location using a global statistical test.
Results. In the t-PA group, patients with frontal lesions had larger infarct volumes and worse NIHSS scores at 3 months after stroke. The clinical status of patients with frontal lesions in the t-PA group was less likely to be affected by lesion volume than that of patients without frontal lesions at 3 months. For patients in the placebo group, both brain stem and internal capsule locations were significantly associated with lower odds of a favorable outcome at 3 months. Using a global test, we could not detect a significant effect of t-PA treatment on lesion location, although differences between the two treatment groups in the proportion of lesion findings in each location were found.
Conclusions.
Frontal, brain stem, and internal capsule locations were significantly related to clinical status at 3 months after stroke onset. We detected no significant t-PA effect across all nine locations, although the proportion of lesion findings differed among locations between the two treatment groups.
Abstract:
It is well known that an identification problem exists in the analysis of age-period-cohort data because of the relationship among the three factors (date of birth + age at death = date of death). There are numerous suggestions about how to analyze such data, but no single solution has been satisfactory. The purpose of this study is to provide another analytic method by extending Cox's life-table regression model with time-dependent covariates. The new approach has the following features: (1) It is based on the conditional maximum likelihood procedure using the proportional hazard function described by Cox (1972), treating the age factor as the underlying hazard to estimate the parameters for the cohort and period factors. (2) The model is flexible, so both the cohort and period factors can be treated as dummy or continuous variables, and parameter estimates can be obtained for numerous combinations of variables, as in a regression analysis. (3) The model is applicable even when the time periods are unequally spaced.
Two specific models are considered to illustrate the new approach and are applied to the U.S. prostate cancer data. We find significant differences between all cohorts and a significant period effect for both whites and nonwhites. The underlying hazard increases exponentially with age, indicating that old people have a much higher risk than young people. A log transformation of relative risk shows that prostate cancer risk declined in recent cohorts under both models. However, prostate cancer risk declined 5 cohorts (25 years) earlier for whites than for nonwhites under the period factor model (0 0 0 1 1 1 1). These results are similar to those of the previous study by Holford (1983).
The new approach offers a general method to analyze age-period-cohort data without imposing any arbitrary constraint in the model.
Abstract:
The problem of analyzing data with updated measurements in the time-dependent proportional hazards model arises frequently in practice. One available option is to reduce the number of intervals (or updated measurements) included in the Cox regression model. We empirically investigated the bias of the estimator of the time-dependent covariate effect while varying the failure rate, sample size, true values of the parameters, and the number of intervals. We also evaluated how often a time-dependent covariate needs to be collected and assessed the effect of sample size and failure rate on the power of testing a time-dependent effect.
A time-dependent proportional hazards model with two binary covariates was considered. The time axis was partitioned into k intervals. The baseline hazard was assumed to be 1, so the failure times were exponentially distributed within the ith interval. A type II censoring model was adopted to characterize the failure rate. The factors of interest were sample size (500, 1000), type II censoring with failure rates of 0.05, 0.10, and 0.20, and three values for each of the non-time-dependent and time-dependent covariate coefficients (1/4, 1/2, 3/4).
The mean bias of the estimator of the coefficient of the time-dependent covariate decreased as sample size and the number of intervals increased, whereas it increased as the failure rate and the true values of the coefficients increased. The mean bias was smallest when all of the updated measurements were used in the model, compared with two models that used only selected measurements of the time-dependent covariate. For the model that included all the measurements, the coverage rates of the estimator of the coefficient of the time-dependent covariate were in most cases 90% or more, except when the failure rate was high (0.20).
The power associated with testing a time-dependent effect was highest when all of the measurements of the time-dependent covariate were used. An example from the Systolic Hypertension in the Elderly Program Cooperative Research Group is presented.
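The interval-based handling of updated measurements can be sketched as the usual counting-process expansion: each subject's follow-up is split at the covariate update times into (start, stop, covariate, event) rows, so the Cox model sees the current covariate value on each interval. The function name, times, and values below are hypothetical.

```python
def split_follow_up(update_times, covariate_values, follow_up_end, event):
    """Expand one subject into (start, stop, z, event) rows.
    update_times[0] is the entry time 0; covariate_values[i] holds on
    [update_times[i], update_times[i+1])."""
    rows = []
    for i, start in enumerate(update_times):
        if start >= follow_up_end:
            break
        if i + 1 < len(update_times):
            stop = min(update_times[i + 1], follow_up_end)
        else:
            stop = follow_up_end
        # The event indicator is attached only to the final interval.
        event_here = event if stop == follow_up_end else 0
        rows.append((start, stop, covariate_values[i], event_here))
    return rows

# Subject measured at months 0, 6, 12; covariate switches on at month 6;
# failure at month 15.
rows = split_follow_up([0.0, 6.0, 12.0], [0, 1, 1], 15.0, 1)
# rows: [(0.0, 6.0, 0, 0), (6.0, 12.0, 1, 0), (12.0, 15.0, 1, 1)]
```

Dropping some update times before splitting corresponds to the "selected measurements" models the study compared against the full-measurement model.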
Abstract:
Of the large clinical trials evaluating screening mammography efficacy, none included women ages 75 and older. Recommendations on an upper age limit at which to discontinue screening are based on indirect evidence and are not consistent. Here, screening mammography is evaluated using observational data from the SEER-Medicare linked database. Measuring the benefit of screening mammography is difficult because of lead-time bias, length bias, and over-detection. The underlying conceptual model divides the disease into two stages: pre-clinical (T0) and symptomatic (T1) breast cancer. Treating the times in these phases as a pair of dependent bivariate observations, (t0, t1), estimates are derived to describe the distribution of this random vector. To quantify the effect of screening mammography, statistical inference is made about the mammography parameters that correspond to the marginal distribution of the symptomatic phase duration (T1). The resulting hazard ratio of death from breast cancer, comparing women with screen-detected tumors to those whose tumors were detected at symptom onset, is 0.36 (0.30, 0.42), indicating a benefit among the screen-detected cases.
Abstract:
This dataset contains pCO2 reconstructed from tree-ring cellulose d13C data, with estimation errors, for 10 sites (locations given below) using a geochemical model, as described in the publication by Trina Bose, Supriyo Chakraborty, Hemant Borgaonkar, and Saikat Sengupta. The data were generated in the Stable Isotope Laboratory, Indian Institute of Tropical Meteorology, Pune - 411008, India.
Abstract:
For a reliable simulation of the time- and space-dependent CO2 redistribution between ocean and atmosphere, an appropriate time-dependent simulation of particle dynamics processes is essential but had not been carried out so far. The major difficulties were the lack of suitable modules for particle dynamics and early diagenesis (needed to close the carbon and nutrient budgets) in ocean general circulation models, and the lack of understanding of biogeochemical processes such as the partial dissolution of calcareous particles in oversaturated water. The main target of ORFOIS was to fill this gap in our knowledge and prediction capability. This goal was achieved step by step. First, comprehensive databases of existing observations relevant to the three major types of biogenic particles, organic carbon (POC), calcium carbonate (CaCO3), and biogenic silica (BSi, or opal), as well as to refractory particles of terrestrial origin, were collated and made publicly available.
Abstract:
The drift of 52 icebergs tagged with GPS buoys in the Weddell Sea since 1999 has been investigated with respect to prevalent drift tracks, sea ice/iceberg interaction, and freshwater fluxes. Buoys were deployed on small- to medium-sized icebergs (edge lengths ≤ 5 km) in the southwestern and eastern Weddell Sea. The basin-scale iceberg drift of this size class was established. In the western Weddell Sea, icebergs followed a northward course with little deviation and mean daily drift rates of up to 9.5 ± 7.3 km/d. To the west of 40°W, the drift of icebergs and sea ice was coherent. In the highly consolidated perennial sea ice cover of 95% concentration, the sea ice exerted a steering influence on the icebergs and was thus responsible for the coherence of the drift tracks. The northward drift of buoys to the east of 40°W was interrupted by large deviations due to the passage of low-pressure systems. Mean daily drift rates in this area were 11.5 ± 7.2 km/d. A lower threshold of 86% sea ice concentration for coherent sea ice/iceberg movement was determined by examining sea ice concentrations derived from Special Sensor Microwave Imager (SSM/I) and Advanced Microwave Scanning Radiometer for EOS (AMSR-E) satellite data. The length scale of coherent movement was estimated to be at least 200 km, about half the value found for the Arctic Ocean but twice as large as previously suggested. The freshwater fluxes estimated from three iceberg export scenarios deduced from the iceberg drift pattern were highly variable. Assuming a transit time in the Weddell Sea of 1 year, the iceberg meltwater input is 31 Gt, which is about a third of the basal meltwater input from the Filchner-Ronne Ice Shelf but spreads across the entire Weddell Sea. The iceberg meltwater export of 14.2 × 10³ m³ s⁻¹, if all icebergs are exported, is in the lower range of freshwater export by sea ice.