13 results for Time interval module
in DigitalCommons@The Texas Medical Center
Abstract:
Can the early identification of the species of Staphylococcus responsible for infection by the use of real-time PCR technology influence the approach to the treatment of these infections? This study was a retrospective cohort study in which two groups of patients were compared. The first group, ‘Physician Aware’, consisted of patients in whom physicians were informed of the specific staphylococcal species and antibiotic sensitivity (using RT-PCR) at the time of notification of the Gram stain. The second group, ‘Physician Unaware’, consisted of patients in whom treating physicians received the same information 24–72 hours later as a result of blood culture and antibiotic sensitivity determination. The approach to treatment was compared between the ‘Physician Aware’ and ‘Physician Unaware’ groups for three different microbiological diagnoses, namely MRSA, MSSA, and no-SA (coagulase-negative Staphylococcus). For a diagnosis of MRSA, the mean time interval to the initiation of vancomycin therapy was 1.08 hours in the ‘Physician Aware’ group as compared to 5.84 hours in the ‘Physician Unaware’ group (p=0.34). For a diagnosis of MSSA, the mean time interval to the initiation of specific anti-MSSA therapy with nafcillin was 5.18 hours in the ‘Physician Aware’ group as compared to 49.8 hours in the ‘Physician Unaware’ group (p=0.007). Also, for the same diagnosis, the mean duration of empiric therapy in the ‘Physician Aware’ group was 19.68 hours as compared to 80.75 hours in the ‘Physician Unaware’ group (p=0.003). For a diagnosis of no-SA or coagulase-negative Staphylococcus, the mean duration of empiric therapy was 35.65 hours in the ‘Physician Aware’ group as compared to 44.38 hours in the ‘Physician Unaware’ group (p=0.07). However, when treatment was considered a categorical variable and after exclusion of all cases where anti-MRS therapy was used for unrelated conditions, only 20 of 72 cases in the ‘Physician Aware’ group received treatment as compared to 48 of 106 cases in the ‘Physician Unaware’ group. Conclusions. Earlier diagnosis of MRSA may not alter final treatment outcomes; however, earlier identification may lead to the earlier institution of measures to limit the spread of infection. The early diagnosis of MSSA infection does lead to treatment with specific antibiotic therapy at an earlier stage, and the duration of empiric therapy is greatly reduced by early diagnosis. The early diagnosis of coagulase-negative staphylococcal infection leads to a lower rate of unnecessary treatment for these infections, as they are commonly considered contaminants.
Abstract:
BACKGROUND: Methylphenidate (MPD) is a psychostimulant commonly prescribed for attention deficit/hyperactivity disorder. The mode of action of the brain circuitry responsible for initiating animals' behavioral responses to psychostimulants is not well understood. There is some evidence that psychostimulants activate the ventral tegmental area (VTA), nucleus accumbens (NAc), and prefrontal cortex (PFC). METHODS: The present study was designed to investigate the acute dose-response effects of MPD (0.6, 2.5, and 10.0 mg/kg) on locomotor behavior and on sensory evoked potentials recorded from the VTA, NAc, and PFC in freely behaving rats previously implanted with permanent electrodes. For locomotor behavior, adult male Wistar-Kyoto (WKY; n = 39) rats were given saline on experimental day 1 and either saline or an acute injection of MPD (0.6, 2.5, or 10.0 mg/kg, i.p.) on experimental day 2. Locomotor activity was recorded for 2 h post-injection on both days using an automated, computerized activity monitoring system. Electrophysiological recordings were also performed in adult male WKY rats (n = 10). Five to seven days after the rats had recovered from the implantation of electrodes, each rat was placed in a sound-insulated electrophysiological test chamber where its sensory evoked field potentials were recorded before and after injection of saline and of 0.6, 2.5, and 10.0 mg/kg MPD. The time interval between injections was 90 min. RESULTS: Locomotion increased in a dose-dependent manner, while the amplitude of the components of the sensory evoked field responses of VTA, NAc, and PFC neurons decreased in a dose-dependent manner. For example, the P3 component of the sensory evoked field response of the VTA decreased from baseline by 19.8% ± 7.4% after treatment with 0.6 mg/kg MPD, 37.8% ± 5.9% after 2.5 mg/kg MPD, and 56.5% ± 3.9% after 10 mg/kg MPD. Greater attenuation from baseline was observed in the NAc and PFC. Differences in the intensity of MPD-induced attenuation were also found among these brain areas. CONCLUSION: These results suggest that acute treatment with MPD produces electrophysiologically detectable alterations at the neuronal level, as well as observable behavioral responses. The present study is the first to investigate the acute dose-response effects of MPD on behavior (locomotor activity) and on the sensory inputs of VTA, NAc, and PFC neurons in intact, non-anesthetized, freely behaving rats previously implanted with permanent electrodes.
Abstract:
The magnitude of the interaction between cigarette smoking, radiation therapy, and primary lung cancer after breast cancer remains unresolved. This case-control study further examines the main and joint effects of cigarette smoking and radiation therapy (XRT) among breast cancer patients who subsequently developed primary lung cancer at The University of Texas M. D. Anderson Cancer Center (MDACC) in Houston, Texas. Cases (n = 280) were women diagnosed with primary lung cancer between 1955 and 1970, aged 30–89 years, who had a prior history of breast cancer and were U.S. residents. Controls (n = 300) were randomly selected from 37,000 breast cancer patients at MDACC, frequency matched to cases on age at diagnosis (in 5-year strata), ethnicity, and year of breast cancer diagnosis (in 5-year strata), and were required to have survived at least as long as the time interval to lung cancer diagnosis in the cases. Stratified analysis and unconditional logistic regression modeling were used to estimate the main and joint effects of cigarette smoking and radiation treatment on lung cancer risk. Medical record review yielded smoking information on 93% of cases and 84% of controls; among cases, 45% received XRT versus 44% of controls. Smoking increased the odds of lung cancer in women who did not receive XRT (OR = 6.0, 95% CI, 3.5–10.1), whereas XRT was not associated with increased odds (OR = 0.5, 95% CI, 0.2–1.1) in women who did not smoke. Overall, the odds ratio for XRT and smoking together, compared with neither exposure, was 9.00 (95% CI, 5.1–15.9). Similarly, when stratifying on laterality of the lung cancer in relation to the breast cancer, and when the time interval between breast and lung cancers was >10 years, the odds were increased for smoking and XRT together both for lung cancers on the same side as the breast cancer (ipsilateral; OR = 11.5, 95% CI, 4.9–27.8) and for lung cancers on the opposite side (contralateral; OR = 9.6, 95% CI, 2.9–0.9). After 20 years the odds for the ipsilateral lung were even more pronounced (OR = 19.2, 95% CI, 4.2–88.4) compared to the contralateral lung (OR = 2.6, 95% CI, 0.2–2.1). In conclusion, smoking was a significant independent risk factor for lung cancer after breast cancer. Moreover, a greater-than-multiplicative effect was observed for smoking and XRT combined, especially evident after 10 years for both the ipsilateral and contralateral lung and after 20 years for the ipsilateral lung.
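As an illustration of the kind of interaction analysis described above, the sketch below fits an unconditional logistic regression with a smoking-by-XRT term and reads off the combined-exposure odds ratio; the file name and column names (case, smoking, xrt, age_stratum) are hypothetical placeholders, not the study's actual variables.

```python
# Hypothetical sketch of estimating main and joint effects with unconditional
# logistic regression; columns and file name are illustrative only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("breast_lung_case_control.csv")  # hypothetical file

# Interaction model: main effects plus a smoking x XRT term, adjusted for
# the age-at-diagnosis matching stratum.
model = smf.logit("case ~ smoking * xrt + C(age_stratum)", data=df).fit()
print(model.summary())

# Odds ratio for the joint exposure (smoking and XRT) versus neither exposure.
b = model.params
or_joint = np.exp(b["smoking"] + b["xrt"] + b["smoking:xrt"])
print(f"OR for smoking + XRT vs. neither: {or_joint:.1f}")
```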
Abstract:
Objectives. Previous studies have shown a survival advantage in ovarian cancer patients with Ashkenazi Jewish (AJ) BRCA founder mutations compared to sporadic ovarian cancer patients. The purpose of this study was to determine whether this association exists in ovarian cancer patients with non-Ashkenazi Jewish BRCA mutations. In addition, we sought to account for possible "survival bias" by minimizing any lead time that may exist between diagnosis and genetic testing. Methods. Patients with stage III/IV ovarian, fallopian tube, or primary peritoneal cancer and a non-Ashkenazi Jewish BRCA1 or BRCA2 mutation, seen for genetic testing between January 1996 and July 2007, were identified from genetics and institutional databases. Medical records were reviewed for clinical factors, including response to initial chemotherapy. Patients with sporadic (non-hereditary) ovarian, fallopian tube, or primary peritoneal cancer and no family history of breast or ovarian cancer were compared to these cases, matched by age, stage, year of diagnosis, and vital status at the time interval to BRCA testing. When possible, two sporadic patients were matched to each BRCA patient. An additional group of unmatched sporadic ovarian, fallopian tube, and primary peritoneal cancer patients was included for a separate analysis. Progression-free survival (PFS) and overall survival (OS) were calculated by the Kaplan-Meier method. Multivariate Cox proportional hazards models were fit for variables of interest. Matched pairs were treated as clusters. A stratified log-rank test was used to calculate survival data for matched pairs using paired event times. Fisher's exact test, chi-square tests, and univariate logistic regression were also used for analysis. Results. Forty-five advanced-stage ovarian, fallopian tube, and primary peritoneal cancer patients with non-Ashkenazi Jewish (non-AJ) BRCA mutations, 86 sporadic-matched patients, and 414 sporadic-unmatched patients were analyzed. Compared to the sporadic-matched and sporadic-unmatched ovarian cancer patients, non-AJ BRCA mutation carriers had longer PFS (17.9 and 13.8 months vs. 32.0 months; HR 1.76 [95% CI 1.13–2.75] and 2.61 [95% CI 1.70–4.00]). Relative to the sporadic-unmatched patients, non-AJ BRCA patients had greater odds of complete response to initial chemotherapy (OR 2.25 [95% CI 1.17–5.41]) and improved OS (37.6 months vs. 101.4 months; HR 2.64 [95% CI 1.49–4.67]). Conclusions. This study demonstrates a significant survival advantage in advanced-stage ovarian cancer patients with non-AJ BRCA mutations, confirming previous studies in the Jewish population. Our efforts to account for "survival bias" by matching will continue with collaborative studies.
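The survival comparisons described above could be set up roughly as in the following sketch, which estimates Kaplan-Meier PFS by carrier status and fits a Cox model with matched pairs treated as clusters; all file and column names (pfs_months, progressed, brca_carrier, pair_id) are hypothetical, not the study's actual data.

```python
# Illustrative sketch only: Kaplan-Meier PFS and a clustered Cox model.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

df = pd.read_csv("ovarian_brca_cohort.csv")  # hypothetical file
# expected columns: pfs_months, progressed (0/1), brca_carrier (0/1), pair_id

kmf = KaplanMeierFitter()
for carrier, grp in df.groupby("brca_carrier"):
    kmf.fit(grp["pfs_months"], grp["progressed"], label=f"BRCA carrier = {carrier}")
    print(carrier, kmf.median_survival_time_)

cph = CoxPHFitter()
cph.fit(
    df[["pfs_months", "progressed", "brca_carrier", "pair_id"]],
    duration_col="pfs_months",
    event_col="progressed",
    cluster_col="pair_id",  # matched pairs treated as clusters (robust variance)
)
cph.print_summary()
```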
Abstract:
Introduction and objective. A number of prognostic factors have been reported for predicting survival in patients with renal cell carcinoma, yet few studies have analyzed the effects of those factors at different stages of the disease process. In this study, different stages of disease progression were evaluated: from nephrectomy to metastasis, from metastasis to death, and from evaluation to death. Methods. In this retrospective follow-up study, the records of 97 deceased renal cell carcinoma (RCC) patients were reviewed between September 2006 and October 2006. Patients with TNM stage IV disease before nephrectomy or with cancer diagnoses other than RCC were excluded, leaving 64 records for analysis. TNM staging, Fuhrman grade, age, tumor size, tumor volume, histology, and patient gender were analyzed in relation to time to metastasis. Time from nephrectomy to metastasis, TNM staging, Fuhrman grade, age, tumor size, tumor volume, histology, and patient gender were tested for significance in relation to time from metastasis to death. Finally, laboratory values at the time of evaluation, Eastern Cooperative Oncology Group (ECOG) performance status, the UCLA Integrated Staging System (UISS), time from nephrectomy to metastasis, TNM staging, Fuhrman grade, age, tumor size, tumor volume, histology, and patient gender were tested for significance in relation to time from evaluation to death. Linear regression and Cox proportional hazards models (univariate and multivariate) were used to test significance. The Kaplan-Meier method with the log-rank test was used to detect significant differences between groups at various endpoints. Results. Compared to patients with negative lymph nodes at the time of nephrectomy, those with a single positive lymph node had a significantly shorter time to metastasis (p<0.0001). Compared to other histological types, clear cell histology was associated with significantly longer metastasis-free survival (p=0.003). Clear cell histology compared to other types (p=0.0002 univariate, p=0.038 multivariate) and log-transformed time to metastasis (p=0.028) significantly affected time from metastasis to death. Metastasis-free intervals of greater than one year and greater than two years, compared with metastasis before one and two years respectively, were associated with a statistically significant survival benefit (p=0.004 and p=0.0318). Time from evaluation to death was affected by a metastasis-free interval greater than one year (p=0.0459), alcohol consumption (p=0.044), LDH (p=0.006), ECOG performance status (p<0.001), and hemoglobin level (p=0.0092). The UISS stratified the survival risk of the patient population in a statistically significant manner (p=0.001). No other factors were found to be significant. Conclusion. Clear cell histology is predictive of both time to metastasis and time from metastasis to death. Nodal status at the time of nephrectomy may predict the risk of metastasis. The time interval to metastasis significantly predicts time from metastasis to death and time from evaluation to death. ECOG performance status and hemoglobin level predict survival outcome at evaluation. Finally, the UISS appropriately stratifies risk in our population.
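A minimal sketch of the log-rank comparison between metastasis-free-interval groups is shown below; the file and column names are hypothetical stand-ins, not taken from the study.

```python
# Illustrative sketch: log-rank test of survival from metastasis to death for
# patients with a metastasis-free interval > 1 year vs. <= 1 year.
import pandas as pd
from lifelines.statistics import logrank_test

df = pd.read_csv("rcc_metastasis.csv")  # hypothetical file
late = df[df["metastasis_free_years"] > 1]
early = df[df["metastasis_free_years"] <= 1]

result = logrank_test(
    late["months_metastasis_to_death"],
    early["months_metastasis_to_death"],
    event_observed_A=late["died"],
    event_observed_B=early["died"],
)
print(f"log-rank p-value: {result.p_value:.4f}")
```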
Abstract:
In prospective studies, it is essential that the study sample accurately represents the target population for meaningful inferences to be drawn. Understanding why some individuals do not participate, or fail to continue to participate, in longitudinal studies can provide an empirical basis for developing effective recruitment and retention strategies to improve response rates. This study examined the influence of social connectedness and self-esteem on long-term retention of participants, using secondary data from the “San Antonio Longitudinal Study of Aging” (SALSA), a population-based study of Mexican Americans (MAs) and European Americans (EAs) aged over 65 years residing in San Antonio, Texas. We tested the effect of social connectedness, self-esteem, and socioeconomic status on participant retention in both ethnic groups. In MAs only, we analyzed whether acculturation and assimilation moderated these associations and/or had a direct effect on participant retention. Low income, low frequency of social contacts, and length of the recruitment interval were significant predictors of non-completer status. Participants with low levels of social contacts were almost twice as likely as those with high levels of social contacts to be non-completers, even after adjustment for age, sex, ethnic group, education, household income, and recruitment interval (OR = 1.95, 95% CI: 1.26–3.01, p = 0.003). Recruitment interval consistently and strongly predicted non-completer status in all the models tested; depending on the model, for each year beyond baseline there was a 25–33% greater likelihood of non-completion. The only significant interaction, or moderating, effect observed was between social contacts and cultural values among MAs. Specifically, MAs with both low social contacts and low acculturation on cultural values (i.e., those who placed high value on preserving Mexican cultural origins) were three and a half times more likely to be non-completers than MAs in the other subgroups formed by the combination of these variables, even after adjustment for covariates. Long-term studies with older and minority participants face particular challenges in participant retention. Strategies can be designed to enhance retention by paying special attention to participants with low social contacts and, among MAs, to participants with both low social contacts and low acculturation on cultural values. Minimizing the time interval between baseline and follow-up recruitment, and maintaining frequent contact with participants during this interval, should also be integral to the study design.
Abstract:
The determinants of change in blood pressure during childhood and adolescence were studied in a cohort drawn from a U.S. national probability sample of 2,146 children examined on two occasions during the Health Examination Survey. Significant negative correlations between the initial level and the subsequent change in blood pressure were observed. Multiple regression analyses showed that the major determinants of the change in systolic blood pressure (SBP) were change in weight, baseline SBP, and baseline upper arm girth. Race, the time interval between examinations, baseline age, and height change were also significant determinants of SBP change. For the change in diastolic blood pressure (DBP), baseline DBP, baseline weight, and weight change were the major determinants; baseline SBP, the time interval, and race were also significant determinants. Sexual maturation variables were also considered in the subgroup analysis for girls. Weight change was the most important predictor of the change in SBP both for girls who were still pre-menarchal or pre-breast-maturation at the time of the follow-up examination and for those who had started to menstruate or to develop breast maturation at some time between the two examinations. Baseline triceps skinfold thickness or initial SBP was more important than weight change for girls who had already experienced menarche or breast maturation at the time of the initial survey. For the total group, pubic hair maturation was found to be a significant predictor of SBP change at the 5% significance level. The importance of weight change and baseline weight for changes in blood pressure warrants further study.
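A regression of this form could be sketched as follows; the file and variable names are hypothetical stand-ins for the determinants listed above, not the survey's actual variable names.

```python
# Illustrative sketch: multiple regression of SBP change on its reported determinants.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("hes_followup.csv")  # hypothetical file
model = smf.ols(
    "sbp_change ~ weight_change + baseline_sbp + baseline_arm_girth"
    " + C(race) + exam_interval + baseline_age + height_change",
    data=df,
).fit()
print(model.summary())
```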
Abstract:
CHARACTERIZATION OF THE COUNT RATE PERFORMANCE AND EVALUATION OF THE EFFECTS OF HIGH COUNT RATES ON MODERN GAMMA CAMERAS
Michael Stephen Silosky, B.S.
Supervisory Professor: S. Cheenu Kappadath, Ph.D.
Evaluation of count rate performance (CRP) is an integral component of gamma camera quality assurance, and measurement of system dead time (τ) is important for quantitative SPECT. The CRP of three modern gamma cameras was characterized using established methods (Decay and Dual Source) under a variety of experimental conditions. For the Decay method, input count rate was plotted against observed count rate and fit to the paralyzable detector model (PDM) to estimate τ (Rates method). A novel expression for observed counts as a function of the measurement time interval was derived, and the observed counts were fit to this expression to estimate τ (Counts method). Correlation and Bland-Altman analyses were performed to assess agreement between the estimates of τ from the two methods. The dependencies of τ on energy window definition and incident energy spectrum were characterized. The Dual Source method was also used to estimate τ; its agreement with the Decay method under identical conditions was assessed, and the effects of total activity and the ratio of source activities were investigated. Additionally, the effects of count rate on several performance metrics were evaluated. The CRP curves for each system agreed with the PDM at low count rates but deviated substantially at high count rates. Estimates of τ for the paralyzable portion of the CRP curves using the Rates and Counts methods were highly correlated (r=0.999) but showed a small (~6%) difference. No significant difference was observed between the highly correlated estimates of τ obtained with the Decay and Dual Source methods under identical experimental conditions (r=0.996). Estimates of τ increased as a power-law function with decreasing ratio of photopeak counts to total counts, and linearly with decreasing spectral effective energy. Dual Source estimates of τ varied quadratically with the ratio of single-source to combined-source activities and linearly with the total activity used, across a large range. Image uniformity, spatial resolution, and energy resolution degraded linearly with count rate, and image-distorting effects were observed. Guidelines for CRP testing and a possible method for the correction of count rate losses in clinical images have been proposed.
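The Rates method described above amounts to fitting the paralyzable detector model, m = n * exp(-n * τ) with input rate n and observed rate m, to the measured count-rate curve. A minimal sketch, using synthetic data and an assumed dead time rather than the dissertation's measurements:

```python
# Illustrative fit of the paralyzable detector model to estimate dead time tau.
import numpy as np
from scipy.optimize import curve_fit

def paralyzable(n, tau):
    # Observed count rate for a paralyzable detector with dead time tau.
    return n * np.exp(-n * tau)

# Synthetic example data (counts/s); real input rates would come from the
# decaying-source measurements described above.
rng = np.random.default_rng(0)
true_tau = 5e-6  # an assumed dead time of 5 microseconds, for illustration
input_rate = np.linspace(1e4, 4e5, 25)
observed_rate = paralyzable(input_rate, true_tau) * rng.normal(1.0, 0.01, 25)

(tau_hat,), cov = curve_fit(paralyzable, input_rate, observed_rate, p0=[1e-6])
print(f"estimated dead time: {tau_hat * 1e6:.2f} microseconds")
```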
Abstract:
Background: No studies have attempted to determine whether nodal surgery utilization, time to initiation and completion of chemotherapy, or surveillance mammography impact breast cancer survival. Objectives and methods: To determine whether receipt of nodal surgery, initiation and completion of chemotherapy, and surveillance mammography affect racial disparities in survival among breast cancer patients in SEER areas, 1992–2005. Results: Adjusting for nodal surgery did not reduce racial disparities in survival. Patients who initiated chemotherapy more than three months after surgery were 1.8 times more likely to die of breast cancer (95% CI 1.3–2.5) than those who initiated chemotherapy less than a month after surgery, even after controlling for known confounders or for race. Despite adjustment for chemotherapy initiation and completion and for known predictors of outcome, African American women still had worse disease-specific survival than their Caucasian counterparts. We found that non-white women underwent surveillance mammography less frequently than white women, and that mammography use during a one- or two-year time interval was associated with a small reduction in the risk of breast cancer-specific and all-cause mortality. Women who received a mammogram during a two-year interval could expect the same disease-specific or overall survival benefit as women who received a mammogram during a one-year interval. While adjustment for surveillance mammography receipt and physician visits reduced differences in mortality between blacks and whites, these survival disparities were eliminated after adjusting for the number of surveillance mammograms received. Conclusions: The disparities in survival among African American and Hispanic women with breast cancer are not explained by nodal surgery utilization or by chemotherapy initiation and completion. Surveillance mammograms, physician visits, and the number of mammograms received may play a major role in achieving equal breast cancer-specific mortality outcomes for women diagnosed with primary breast cancer. Racial disparities in all-cause mortality were explained to a certain degree by racial differences in surveillance mammography, but were no longer significant after controlling for differences in comorbidity. Focusing on access to quality care and post-treatment surveillance might help achieve national goals to eliminate racial disparities in healthcare and outcomes.
Abstract:
Of the large clinical trials evaluating the efficacy of screening mammography, none included women aged 75 and older. Recommendations on an upper age limit at which to discontinue screening are based on indirect evidence and are not consistent. Here, screening mammography is evaluated using observational data from the SEER-Medicare linked database. Measuring the benefit of screening mammography is difficult because of the impact of lead-time bias, length bias, and over-detection. The underlying conceptual model divides the disease into two stages: pre-clinical (T0) and symptomatic (T1) breast cancer. The times spent in these phases are treated as a pair of dependent bivariate observations, (t0, t1), and estimates are derived to describe the distribution of this random vector. To quantify the effect of screening mammography, statistical inference is made about the mammography parameters that correspond to the marginal distribution of the symptomatic phase duration (T1). The resulting hazard ratio of death from breast cancer, comparing women with screen-detected tumors to those whose tumors were detected at symptom onset, is 0.36 (0.30, 0.42), indicating a benefit among the screen-detected cases.
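A hedged sketch of the two-phase notation (the symbols are mine and may not match the dissertation's): inference targets the marginal law of the symptomatic-phase duration, obtained by integrating the dependent joint density over the preclinical sojourn time.

```latex
% T0: preclinical sojourn time; T1: symptomatic-phase duration.
% The joint density f_{T0,T1} allows dependence between the two phases;
% inference targets the marginal distribution of T1.
\[
  f_{T_1}(t_1) \;=\; \int_0^{\infty} f_{T_0,T_1}(t_0, t_1)\,\mathrm{d}t_0 .
\]
```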
Abstract:
A general model for the illness-death stochastic process with covariates has been developed for the analysis of survival data. This model incorporates important baseline and time-dependent covariates to make proper adjustments to the transition probabilities and survival probabilities. The follow-up period is subdivided into small intervals, and a constant hazard is assumed for each interval. An approximation formula is derived to estimate the transition parameters when the exact transition time is unknown. The method developed is illustrated using data from the Beta-Blocker Heart Attack Trial (BHAT), a study on the prevention of the recurrence of myocardial infarction and subsequent mortality. This method provides an analytical approach that simultaneously includes provision for both fatal and nonfatal events in the model. Under this analysis, the effectiveness of treatment can be compared between the placebo and propranolol groups with respect to fatal and nonfatal events.
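A brief sketch of the piecewise-constant-hazard construction (notation mine, not necessarily the author's): with the follow-up period cut at 0 = a_0 < a_1 < ... < a_k and a constant hazard on each interval, the within-state survival function and the single-exit transition probability take the usual piecewise-exponential forms.

```latex
% Piecewise-constant hazard lambda_j on (a_{j-1}, a_j].
\[
  S(t) \;=\; \exp\!\Big(-\sum_{j=1}^{k} \lambda_j\,\Delta_j(t)\Big),
  \qquad
  \Delta_j(t) \;=\; \max\bigl(0,\ \min(t, a_j) - a_{j-1}\bigr).
\]
% For a state with a single exit hazard lambda_j, the probability of making the
% transition during interval j, given occupancy of the state at a_{j-1}, is
\[
  1 - e^{-\lambda_j\,(a_j - a_{j-1})}.
\]
```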
Abstract:
The problem of analyzing data with updated measurements in the time-dependent proportional hazards model arises frequently in practice. One available option is to reduce the number of intervals (or updated measurements) included in the Cox regression model. We empirically investigated the bias of the estimator of the time-dependent covariate effect while varying the failure rate, sample size, true parameter values, and number of intervals. We also evaluated how often a time-dependent covariate needs to be collected, and assessed the effect of sample size and failure rate on the power of testing a time-dependent effect. A time-dependent proportional hazards model with two binary covariates was considered. The time axis was partitioned into k intervals. The baseline hazard was assumed to be 1, so that the failure times were exponentially distributed within the ith interval. A type II censoring model was adopted to characterize the failure rate. The factors of interest were sample size (500, 1000), type II censoring with failure rates of 0.05, 0.10, and 0.20, and three values for each of the non-time-dependent and time-dependent covariate coefficients (1/4, 1/2, 3/4). The mean bias of the estimator of the coefficient of the time-dependent covariate decreased as sample size and number of intervals increased, whereas it increased as the failure rate and the true coefficient values increased. The mean bias of the estimator was smallest when all of the updated measurements were used in the model, compared with two models that used only selected measurements of the time-dependent covariate. For the model that included all the measurements, the coverage rates of the estimator of the coefficient of the time-dependent covariate were in most cases 90% or more, except when the failure rate was high (0.20). The power associated with testing a time-dependent effect was highest when all of the measurements of the time-dependent covariate were used. An example from the Systolic Hypertension in the Elderly Program Cooperative Research Group is presented.
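A minimal simulation sketch of the setting described above, written under the stated assumptions (baseline hazard 1, two binary covariates, equal-width intervals, piecewise-constant time-dependent covariate, type II censoring); this is an illustrative construction, not the study's code.

```python
# Illustrative simulation of failure times under a time-dependent
# proportional hazards model with piecewise-exponential failure times.
import numpy as np

rng = np.random.default_rng(0)

def draw_failure_time(z1, z2_path, beta1, beta2, width=1.0):
    """Failure time under hazard exp(beta1*z1 + beta2*z2_i) on interval i."""
    t = 0.0
    for z2 in z2_path:
        rate = np.exp(beta1 * z1 + beta2 * z2)  # baseline hazard = 1
        gap = rng.exponential(1.0 / rate)
        if gap <= width:
            return t + gap
        t += width
    return np.inf  # no failure within the k intervals

# Example: n subjects, k = 5 intervals, coefficients 1/2, type II censoring
# after 10% of the sample has failed.
n, k, beta1, beta2 = 1000, 5, 0.5, 0.5
z1 = rng.integers(0, 2, n)
z2 = rng.integers(0, 2, (n, k))  # one updated measurement per interval
times = np.array([draw_failure_time(z1[i], z2[i], beta1, beta2) for i in range(n)])

n_events = int(0.10 * n)
censor_time = np.sort(times)[n_events - 1]  # type II censoring threshold
event = times <= censor_time
obs_time = np.minimum(times, censor_time)
```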
Abstract:
Many statistical studies feature data with both exact-time and interval-censored events. While a number of methods currently exist to handle interval-censored events and multivariate exact-time events separately, few techniques exist to deal with their combination. This thesis develops a theoretical framework for analyzing a multivariate endpoint comprising a single interval-censored event plus an arbitrary number of exact-time events. The approach fuses the exact-time events, modeled using the marginal method of Wei, Lin, and Weissfeld, with a piecewise-exponential interval-censored component. The resulting model incorporates more of the information in the data and also removes some of the biases associated with the exclusion of interval-censored events. A simulation study demonstrates that our approach produces reliable estimates of the model parameters and their variance-covariance matrix. As a real-world data example, we apply this technique to the Systolic Hypertension in the Elderly Program (SHEP) clinical trial, which features three correlated events: clinical non-fatal myocardial infarction and fatal myocardial infarction (two exact-time events), and silent myocardial infarction (one interval-censored event).
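As a hedged sketch of the interval-censored component (notation mine): under a piecewise-exponential model, an event known only to lie in an interval (L, R] contributes the term below to the likelihood, which the thesis combines with the Wei-Lin-Weissfeld marginal models for the exact-time events.

```latex
% Piecewise-exponential survival for the interval-censored event, with constant
% hazard lambda_j on the j-th piece and Delta_j(t) the time spent in that piece
% before t.
\[
  P(L < T \le R) \;=\; S(L) - S(R),
  \qquad
  S(t) \;=\; \exp\!\Big(-\sum_{j} \lambda_j\,\Delta_j(t)\Big).
\]
```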