29 results for Time to surgery


Relevance: 90.00%

Abstract:

Background. Laparoscopic cholecystectomy is the gold standard treatment for patients diagnosed with biliary colic (NIH, 1993). It has been demonstrated that individuals who wait longer between diagnosis and treatment are at increased risk of complications (Rutledge et al., 2000; Contini et al., 2004; Eldar et al., 1999). County hospitals, such as Ben Taub General Hospital (BTGH), serve a particularly large population of uninsured patients and consequently have long surgical wait periods due to limited resources. This study evaluates the risk factors involved in patients' progression to complications from gallstones in a county hospital environment.

Methods. A case-control study using medical records was performed on all patients who underwent a cholecystectomy for gallstone disease at BTGH during 2005 (n=414). The risk factors included in the study were obesity, gender, age, race, diabetes, and time from diagnosis to surgery. Multivariate analysis and logistic regression were used to assess factors that potentially lead to the development of complications.

Results. A total of 414 patients underwent a cholecystectomy for gallstone disease at BTGH during 2005. The majority of patients were female (84.3%, n=349) and Hispanic (79.7%, n=330). The median wait time from diagnosis to surgery was 1.43 weeks (range: 0-184.71). The majority of patients presented with complications (72.5%, n=112). The two factors that impacted development of complications in our study population were Hispanic race (OR=1.81; CI 1.02, 3.23; p=0.04) and time from diagnosis to surgery (OR=0.98; CI 0.97, 0.99; p<0.01). Obesity, gender, age, and diabetes were not predictive of the development of complications.

Conclusions. An individual's socioeconomic status potentially influences all aspects of their health and subsequent health care. The patient population of BTGH is largely uninsured and therefore less likely to seek care at an early stage in the disease process. To decrease the rate of complications, there needs to be a system that increases patient access to primary care clinics. Until the problem of access to care is solved, those who are uninsured will likely suffer more severe complications and society will bear the cost.
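The odds ratios above come from logistic regression; for a single binary exposure in a case-control design, the crude estimate reduces to the familiar 2x2 cross-product ratio. Below is a minimal sketch of that calculation with a Woolf (log-scale) confidence interval. The counts are invented for illustration and are not the study's data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio from a 2x2 table with a Woolf (log-scale) CI.

    a = exposed cases,   b = exposed controls,
    c = unexposed cases, d = unexposed controls.
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Invented counts, for illustration only:
or_, lo, hi = odds_ratio_ci(90, 240, 22, 106)
print(f"OR = {or_:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")  # OR = 1.81, 95% CI (1.08, 3.04)
```

An adjusted odds ratio, as reported in the study, would instead come from fitting the full multivariable logistic model.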

Relevance: 90.00%

Abstract:

Background. Surgical site infections (SSI) are among the most common nosocomial infections in the United States. This study was conducted following an increase in the rate of SSI after spinal procedures at the study hospital.

Methods. This study examined patient- and hospital-associated risk factors for SSI using existing data on patients who had spinal surgery performed at the study hospital between December 2003 and August 2005. There were 59 patients with SSI identified as cases; controls were randomly selected from patients who had spinal procedures performed at the study hospital during the study period but did not develop infection. Of the 245 original records reviewed, 5% were missing more than half the variables and were eliminated from the data set. A total of 234 patients were included in the final analysis, representing 55 cases and 179 controls. Multivariable analysis was conducted using logistic regression to control for confounding variables.

Results. Three variables were found to be significantly associated with SSI in the study population: presence of comorbidities (odds ratio 3.15, 95% confidence interval 1.20 to 8.26), cut time above the population median of 100 minutes (odds ratio 2.98, 95% confidence interval 1.12 to 5.49), and use of iodine only for preoperative skin antisepsis, which was protective (odds ratio 0.16, 95% confidence interval 0.06 to 0.45). Several risk factors of specific concern to the study hospital, such as operating room, hospital staff involved in the procedures, and workers' compensation status, were not shown to be statistically significant. In addition, multiple factors that have been identified in prior studies, such as method of hair removal, smoking status, or incontinence, were not shown to be statistically significant in this population.

Conclusions. This study confirms that increased cut time is a risk factor for post-operative infection. Use of iodine only was found to decrease risk of infection; further study is recommended in a population with higher usage of chlorhexidine gluconate. Presence of comorbidities at the time of surgery was also found to be a risk factor for infection; however, specific comorbidities were not studied.

Relevance: 90.00%

Abstract:

Introduction and objective. A number of prognostic factors have been reported for predicting survival in patients with renal cell carcinoma, yet few studies have analyzed the effects of those factors at different stages of the disease process. In this study, different stages of disease progression were evaluated: from nephrectomy to metastasis, from metastasis to death, and from evaluation to death.

Methods. In this retrospective follow-up study, records of 97 deceased renal cell carcinoma (RCC) patients were reviewed between September 2006 and October 2006. Patients with TNM stage IV disease before nephrectomy or with cancer diagnoses other than RCC were excluded, leaving 64 records for analysis. TNM staging, Fuhrman grade, age, tumor size, tumor volume, histology, and patient gender were analyzed in relation to time to metastasis. Time from nephrectomy to metastasis, together with the same factors, was tested for significance in relation to time from metastasis to death. Finally, laboratory values at the time of evaluation, Eastern Cooperative Oncology Group (ECOG) performance status, UCLA Integrated Staging System (UISS) risk, time from nephrectomy to metastasis, and the same factors were tested for significance in relation to time from evaluation to death. Linear regression and Cox proportional hazards models (univariate and multivariate) were used to test significance. The Kaplan-Meier log-rank test was used to detect differences between groups at various endpoints.

Results. Compared to patients with negative lymph nodes at the time of nephrectomy, those with a single positive lymph node had a significantly shorter time to metastasis (p<0.0001). Compared to other histological types, clear cell histology had significantly longer metastasis-free survival (p=0.003). Clear cell histology compared to other types (p=0.0002 univariate, p=0.038 multivariate) and time to metastasis with log conversion (p=0.028) significantly affected time from metastasis to death. Metastasis-free intervals of greater than one year and greater than two years, compared to metastasis before one and two years, conferred a statistically significant survival benefit (p=0.004 and p=0.0318). Time from evaluation to death was affected by a greater than one year metastasis-free interval (p=0.0459), alcohol consumption (p=0.044), LDH (p=0.006), ECOG performance status (p<0.001), and hemoglobin level (p=0.0092). The UISS risk stratified the patient population in a statistically significant manner for survival (p=0.001). No other factors were found to be significant.

Conclusion. Clear cell histology is predictive for both time to metastasis and time from metastasis to death. Nodal status at the time of nephrectomy may predict risk of metastasis. The time interval to metastasis significantly predicts time from metastasis to death and time from evaluation to death. ECOG performance status and hemoglobin level predict survival outcome at evaluation. Finally, UISS appropriately stratifies risk in our population.
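The time-to-event comparisons above rely on Cox proportional hazards models. As a self-contained illustration of what such a fit does (a sketch, not the study's code), the following fits a one-covariate Cox model by Newton-Raphson on the Breslow partial likelihood; a real analysis would use a vetted package such as R's `survival` or Python's `lifelines`.

```python
import math

def cox_univariate(times, events, x, iters=25):
    """Fit a one-covariate Cox PH model by Newton-Raphson on the
    Breslow log partial likelihood. Returns (beta, hazard ratio)."""
    n = len(times)
    beta = 0.0
    for _ in range(iters):
        score, info = 0.0, 0.0
        for i in range(n):
            if not events[i]:
                continue                      # censored: no event contribution
            risk = [j for j in range(n) if times[j] >= times[i]]
            w = [math.exp(beta * x[j]) for j in risk]
            sw = sum(w)
            xbar = sum(wj * x[j] for wj, j in zip(w, risk)) / sw
            x2bar = sum(wj * x[j] ** 2 for wj, j in zip(w, risk)) / sw
            score += x[i] - xbar              # gradient term at this event time
            info += x2bar - xbar ** 2         # negative-Hessian term
        if info == 0:
            break
        beta += score / info                  # Newton step (log-likelihood is concave)
    return beta, math.exp(beta)

# Toy data: the x=1 group tends to fail earlier, so beta > 0 and HR > 1.
beta, hr = cox_univariate([1, 3, 5, 7, 2, 4, 6, 8], [1] * 8,
                          [1, 1, 1, 1, 0, 0, 0, 0])
```

The hazard ratio `exp(beta)` is the quantity reported (per covariate) in multivariate analyses like the one above.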

Relevance: 90.00%

Abstract:

Introduction. 3-Hydroxy-3-methylglutaryl CoA reductase inhibitors ("statins") have been widely used for hypercholesterolemia, and statin-induced myopathy is well known. Whether statins contribute to exacerbations of myasthenia gravis (MG) requiring hospitalization is not well known.

Objectives. To determine the frequency of statin use in patients with MG seen at the neuromuscular division at the University of Alabama at Birmingham (UAB), and to evaluate any association between use of statins and MG exacerbations requiring hospitalization in patients with an established diagnosis of myasthenia gravis.

Methods. We reviewed records of all current MG patients at the UAB neuromuscular department to obtain details on use of statins and any hospitalizations due to exacerbation of MG over the period from January 1, 2003 to December 31, 2006.

Results. Of the 113 MG patients on whom information was available for this period, 40 were on statins during at least one clinic visit. Statin users were more likely to be older (mean age 60.2 vs. 53.8, p=0.029), male (70.0% vs. 43.8%, p=0.008), and to have a later onset of myasthenia gravis (mean age at onset 49.8 vs. 42.9 years, p=0.051). Neither the total number of hospitalizations nor the proportion of subjects with at least one hospitalization during the study period differed between the statin and no-statin groups. However, when hospitalizations that occurred from a suspected precipitant were excluded ("events"), the proportion of subjects who had at least one such event during the study period was higher in the group using statins. In the final Cox proportional hazards model for cumulative time to event, statin use (HR=6.44, p<0.01) and baseline immunosuppression (HR=3.03, p=0.07) were found to increase the hazard of an event.

Conclusions. Statin use may increase the rate of hospitalizations due to MG exacerbation when exacerbations precipitated by other suspected factors are excluded.

Relevance: 90.00%

Abstract:

Purpose. A descriptive analysis of glioma patients by race was carried out to better elucidate potential differences between races in demographics, treatment, tumor characteristics, prognosis, and survival.

Patients and Methods. The cohort comprised 1,967 patients ≥18 years of age diagnosed with glioma and seen between July 2000 and September 2006 at The University of Texas M.D. Anderson Cancer Center (UTMDACC). Data were collated from the UTMDACC Patient History Database (PHDB) and the UTMDACC Tumor Registry Database (TRDB). Chi-square analysis, uni-/multivariate Cox proportional hazards modeling, and survival analysis were used to analyze differences by race.

Results. Demographic, treatment, and histologic differences exist between races. Though risk differences were seen between races, race was not found to be a significant predictor in multivariate regression analysis after accounting for age, surgery, chemotherapy, radiation, and tumor type as stratified by WHO tumor grade. Age was the most consistent predictor of risk for death. Overall survival by race was significantly different (p=0.0049) only in low-grade gliomas after adjustment for age, although survival differences were very slight.

Conclusion. Among this cohort of glioma patients, age was the strongest predictor of survival. It is likely that survival is more influenced by age, time to treatment, tumor grade, and surgical expertise than by racial differences. However, age at diagnosis, gender ratios, histology, and history of cancer differed significantly between races, and genetic differences to this effect cannot be excluded.

Relevance: 90.00%

Abstract:

Knee osteoarthritis (OA) is the most prevalent form of arthritis in the US, affecting approximately 37% of adults. Approximately 300,000 total knee arthroplasty (TKA) procedures take place in the United States each year. Total knee arthroplasty is an elective procedure available to patients as an irreversible treatment after failure of previous medical treatments. Some patients sacrifice quality of life and endure many years of pain before making the decision to undergo total knee replacement. In making their decision, it is therefore imperative for patients to understand the procedure, risks, and surgical outcomes in order to form realistic expectations and increase satisfaction with the outcome.

From 2004-2007, 236 OA patients who underwent TKA participated in the PEAKS (Patient Expectations About Knee Surgery) study, an observational longitudinal cohort study, and completed baseline and 6-month post-surgery follow-up questionnaires. We performed a secondary data analysis of the PEAKS study to: (1) determine the specific presurgical patient characteristics associated with patients' presurgical expectations of time to functional recovery; and (2) determine the association between presurgical expectations of time to functional recovery and postsurgical patient capabilities (6 months after TKA). We used the WOMAC to measure knee pain and function, the SF-36 to measure health-related quality of life, and the DASS and MOS-SSS to measure psychosocial quality-of-life variables. Expectation and capability measures were generated by a panel of experts. A list of 10 activities was used in this analysis to measure functional expectations and postoperative functional capabilities.

The final cohort consisted of 236 individuals, predominantly White, with 154 women and 82 men; the mean age was 65 years. Patients were optimistic about their time to functional recovery: the median expected time to being able to perform the listed activities was less than 3 months. Patients who expected to be able to perform the functional activities by 3 months had better knee function, less pain, and better overall health-related quality of life. Despite expectation differences, all patients showed significant improvement 6 months after surgery. Participant expectation of time to functional recovery was not an independent predictor of capability to perform functional activities at 6 months. Better presurgical patient characteristics were, however, associated with a higher likelihood of being able to perform all activities at 6 months.

This study provides initial insight into the relationship between presurgical patient characteristics and expectations of functional recovery after total knee replacement. Future studies clarifying the relationship between presurgical patient characteristics and postsurgical functional capabilities are needed.

Relevance: 90.00%

Abstract:

Research studies on the association between exposures to air contaminants and disease frequently use worn dosimeters to measure the concentration of the contaminant of interest. But investigation of exposure determinants requires knowledge beyond concentration, i.e., knowledge about personal activity such as whether the exposure occurred in a building or outdoors. Current studies frequently depend upon manual activity logging to record location. This study's purpose was to evaluate the use of a worn data logger recording three environmental parameters—temperature, humidity, and light intensity—as well as time of day, to determine indoor or outdoor location, with the ultimate aim of eliminating the need to manually log location, or at least providing a method to verify such logs.

For this study, data collection was limited to a single geographical area (the Houston, Texas metropolitan area) during a single season (winter) using a HOBO H8 four-channel data logger. Data for development of a Location Model were collected using the logger during deliberate sampling of programmed activities in outdoor, building, and vehicle locations at various times of day. The Model was developed by analyzing the distributions of the environmental parameters by location and time to establish a prioritized set of cut points for assessing location. The final Model consisted of four "processors" that varied these priorities and cut points.

Data to evaluate the Model were collected by wearing the logger during "typical days" while maintaining a location log. The Model was tested by feeding the typical-day data into each processor and generating assessed locations for each record. These assessed locations were then compared with the true locations recorded in the manual log to determine accurate versus erroneous assessments. The utility of each processor was evaluated by calculating overall error rates across all times of day and individual error rates by time of day. Unfortunately, the error rates were large, such that there would be no benefit in using the Model. Another analysis, in which assessed locations were classified as either indoor (including both building and vehicle) or outdoor, yielded slightly lower error rates that still precluded any benefit from the Model's use.
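The Model described above is essentially a prioritized cut-point classifier over the logged channels, scored against a manual location log. A toy sketch of the idea follows; the thresholds here are invented for illustration and are not the study's derived cut points.

```python
def classify_location(temp_f, rh_pct, light_lux, hour):
    """Toy indoor/outdoor classifier from logger channels.

    Hypothetical rules: a climate-controlled temperature/humidity range
    suggests 'indoor'; bright light during daytime hours suggests
    'outdoor'; otherwise default to 'indoor'.
    """
    if 68 <= temp_f <= 78 and 30 <= rh_pct <= 60:
        return "indoor"
    if light_lux > 1000 and 7 <= hour <= 18:
        return "outdoor"
    return "indoor"

def error_rate(records):
    """records: iterable of ((temp_f, rh_pct, light_lux, hour), true_label).

    Returns the fraction of records the classifier gets wrong, the
    evaluation metric used for each processor in the study.
    """
    records = list(records)
    wrong = sum(classify_location(*obs) != label for obs, label in records)
    return wrong / len(records)
```

Each of the study's four "processors" would correspond to a different rule ordering and set of cut points, evaluated with the same error-rate metric.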

Relevance: 90.00%

Abstract:

The Federal Food and Drug Administration (FDA) and the Centers for Medicare and Medicaid Services (CMS) play key roles in making Class III medical devices available to the public, and they are required by law to meet statutory deadlines for applications under review. Historically, both agencies have failed to meet their respective statutory requirements. Since these failures affect patient access and may adversely impact public health, Congress has enacted several "modernization" laws. However, the effectiveness of these modernization laws has not been adequately studied or established for Class III medical devices.

The aim of this research study was, therefore, to analyze how these modernization laws may have affected public access to medical devices. Two questions were addressed: (1) How have the FDA modernization laws affected the time to approval for medical device premarket approval applications (PMAs)? (2) How has the CMS modernization law affected the time to approval for national coverage decisions (NCDs)? The data were collected from publicly available databases for the period January 1, 1995 through December 31, 2008; these dates were selected to capture a sufficient period to measure pre- and post-modernization effects on time to approval. All records containing original PMAs were obtained from the FDA database, and all records containing NCDs were obtained from the CMS database. Source documents, including FDA premarket approval letters and CMS national coverage decision memoranda, were reviewed to obtain additional data not found in the search results. Analyses were conducted to determine the effects of the pre- and post-modernization laws on time to approval. Secondary analyses of FDA subcategories were conducted to uncover any causal factors that might explain differences in time to approval and to compare with the primary trends.

The primary analysis showed that the FDA modernization laws of 1997 and 2002 initially reduced PMA time to approval; after the 2002 modernization law, the time to approval began increasing and continued to increase through December 2008. The non-combined subcategory approval trends were similar to the primary analysis trends. The combined subcategory analysis showed no clear trends, with the exception of non-implantable devices, for which time to approval trended down after 1997. The CMS modernization law of 2003 reduced NCD time to approval, a trend that continued through December 2008. This study also showed that approximately 86% of PMA devices do not receive NCDs.

As a result of this research study, recommendations are offered to help resolve statutory non-compliance and access issues, as follows: (1) authorities should examine underlying causal factors for the observed trends; (2) process improvements should be made to better coordinate FDA and CMS activities, including sharing data, reducing duplication, and establishing clear criteria for "safe and effective" and "reasonable and necessary"; (3) a common identifier should be established to allow tracking and trending of applications between FDA and CMS databases; (4) statutory requirements may need to be revised; and (5) an investigation should be undertaken to determine why NCDs are not issued for the majority of PMAs. Any process improvements should be made without creating additional safety risks or adversely impacting public health. Finally, additional studies are needed to fully characterize and better understand the trends identified in this research study.

Relevance: 90.00%

Abstract:

Racial differences in heart failure with preserved ejection fraction (HFpEF) have rarely been studied in an ambulatory, financially "equal access" cohort, although the majority of such patients are treated as outpatients.

Retrospective data were collected from 2,526 patients (2,240 White, 286 African American) with HFpEF treated at 153 VA clinics as part of the VA External Peer Review Program (EPRP) between October 2000 and September 2002. Kaplan-Meier curves (stratified by race) were created for time to first heart failure (HF) hospitalization, all-cause hospitalization, and death, and Cox proportional hazards multivariate regression models were constructed to evaluate the effect of race on these outcomes.

African American patients were younger (67.7 ± 11.3 vs. 71.2 ± 9.8 years; p<0.001) and had a lower prevalence of atrial fibrillation (24.5% vs. 37%; p<0.001) and chronic obstructive pulmonary disease (23.4% vs. 36.9%; p<0.001), but had higher blood pressure (systolic blood pressure >120 mm Hg in 77.6% vs. 67.8%; p<0.01), a higher glomerular filtration rate (67.9 ± 31.0 vs. 61.6 ± 22.6 mL/min/1.73 m2; p<0.001), and a higher prevalence of anemia (56.6% vs. 41.7%; p<0.001) compared to White patients. African Americans were found to have a higher risk-adjusted rate of HF hospitalization (HR 1.52, 95% CI 1.1-2.11; p=0.01), with no difference in risk-adjusted all-cause hospitalization (p=0.80) or death (p=0.21).

In the financially "equal access" setting of the VA, among ambulatory patients with HFpEF, African Americans have similar rates of mortality and all-cause hospitalization but an increased risk of HF hospitalization compared to Whites.
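The Kaplan-Meier curves referred to above are built from the standard product-limit estimator. Here is a minimal pure-Python sketch of that estimator (the textbook algorithm, not the study's code):

```python
def kaplan_meier(times, events):
    """Product-limit (Kaplan-Meier) survival estimate.

    times  : follow-up time per subject
    events : 1 if the event occurred at that time, 0 if censored
    Returns a list of (time, S(time)) pairs at each distinct event time.
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    s = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        tied = [e for tt, e in data if tt == t]
        deaths = sum(tied)
        if deaths:
            s *= 1 - deaths / n_at_risk   # conditional survival through t
            curve.append((t, s))
        n_at_risk -= len(tied)            # events and censorings both leave the risk set
        i += len(tied)
    return curve
```

Stratifying by race, as in the study, simply means computing one such curve per group and comparing them (e.g. with a log-rank test).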

Relevance: 90.00%

Abstract:

The natural history of placebo-treated travelers' diarrhea and the prognostic factors of recovery from diarrhea were evaluated using 9 groups of placebo-treated subjects from 9 clinical trials conducted since 1975, for use as a historical control in future clinical trials of antidiarrheal agents. All of these studies were done by the same group of investigators at one site (Guadalajara, Mexico). The studies are similar in terms of population, measured parameters, microbiologic identification of enteropathogens, and definitions of parameters. The studies had two different durations of follow-up: in some, subjects were followed for two days, and in others for five days.

Using definitions established by the Infectious Diseases Society of America and the Food and Drug Administration, the following efficacy parameters were evaluated: time to last unformed stool (TLUS), number of unformed stools passed during five consecutive days of follow-up after initiation of placebo treatment, microbiologic cure, and improvement of diarrhea. Among the groups that were followed for five days, the mean TLUS ranged from 59.1 to 83.5 hours. Fifty percent to 78% had diarrhea lasting more than 48 hours, and 25% had diarrhea for more than five days. The mean number of unformed stools passed on the first day post-initiation of therapy ranged from 3.6 to 5.8, and on the fifth day from 0.5 to 1.5. By the end of follow-up, diarrhea had improved in 82.6% to 90% of the subjects. Subjects with enterotoxigenic E. coli had 21.6% to 90.0% microbiologic cure, and subjects with Shigella species experienced 14.3% to 60.0% microbiologic cure.

To evaluate the prognostic factors of recovery from diarrhea (the primary efficacy parameter in evaluating antidiarrheal agents against travelers' diarrhea), the subjects from five studies were pooled and the Cox proportional hazards model was used to evaluate predictors of prolonged diarrhea. After adjusting for design characteristics of each trial, fever with a rate ratio (RR) of 0.40, presence of invasive pathogens with an RR of 0.41, presence of severe abdominal pain and cramps with an RR of 0.50, more than five watery stools with an RR of 0.60, and presence of non-invasive pathogens with an RR of 0.84 predicted a longer duration of diarrhea. Severe vomiting, with an RR of 2.53, predicted a shorter duration of diarrhea. The number of soft stools, presence of fecal leukocytes, presence of nausea, and duration of diarrhea before enrollment were not associated with duration of diarrhea.
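The rate ratios above come from an adjusted Cox model. As a simpler illustration of the quantity itself, a crude incidence rate ratio with a log-scale confidence interval can be computed directly from event counts and person-time; the numbers below are invented for illustration, not drawn from these trials.

```python
import math

def rate_ratio(e1, pt1, e0, pt0, z=1.96):
    """Crude incidence rate ratio with a log-scale 95% CI.

    e = number of events, pt = person-time at risk, for groups 1 and 0.
    """
    rr = (e1 / pt1) / (e0 / pt0)
    se = math.sqrt(1 / e1 + 1 / e0)   # SE of log(rr) under Poisson counts
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Illustrative: 10 recoveries over 100 person-days vs. 20 over 100.
rr, lo, hi = rate_ratio(10, 100, 20, 100)   # rr = 0.5
```

A rate ratio below 1 for recovery, as with fever (RR 0.40) above, means slower recovery and hence longer diarrhea; the Cox model additionally adjusts these ratios for the other covariates.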

Relevance: 90.00%

Abstract:

A life table methodology was developed that estimates the expected remaining Army service time and the expected remaining Army sick time by years of service for the United States Army population. A measure of illness impact was defined as the ratio of expected remaining Army sick time to expected remaining Army service time. The variances of the resulting estimators were developed on the basis of current data. The theory of partial and complete competing risks was considered for each type of decrement (death, administrative separation, and medical separation) and for the causes of sick time.

The methodology was applied to worldwide U.S. Army data for calendar year 1978. A total of 669,493 enlisted personnel and 97,704 officers were reported on active duty as of 30 September 1978. During calendar year 1978, the Army Medical Department reported 114,647 inpatient discharges and 1,767,146 sick days. Although the methodology is completely general with respect to the definition of sick time, only sick time associated with an inpatient episode was considered in this study.

Since the temporal measure was years of Army service, an age-adjusting process was applied to the life tables for comparative purposes. Analyses were conducted by rank (enlisted and officer), race, and sex, and were based on the ratio of expected remaining Army sick time to expected remaining Army service time. Seventeen major diagnostic groups, classified by the Eighth Revision of the International Classification of Diseases, Adapted for Use in the United States, were ranked according to their cumulative (across years of service) contribution to expected remaining sick time.

The study results indicated that enlisted personnel tend to have more expected hospital-associated sick time relative to their expected Army service time than officers. Non-white officers generally have more expected sick time relative to their expected Army service time than white officers; this racial differential was not supported within the enlisted population. Females tend to have more expected sick time relative to their expected Army service time than males, a tendency that remained after diagnostic groups 580-629 (Genitourinary System) and 630-678 (Pregnancy and Childbirth) were removed. Problems associated with the circulatory system, digestive system, and musculoskeletal system were among the three leading causes of cumulative sick time across years of service.
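The core life-table quantity, expected remaining service time by year of service, can be sketched as a sum of cumulative survival probabilities. This is a deliberate simplification of the study's methodology, which also partitions decrements by cause and attaches variances to the estimators.

```python
def expected_remaining(p_continue):
    """Expected remaining years of service at the start of each year.

    p_continue[k] is the probability that a member who begins year k of
    service completes it and enters year k+1, i.e. survives all
    decrements (death, administrative separation, medical separation)
    during year k. Then e_k = sum over j >= k of P(still serving
    through year j), a discrete curtate expectation.
    """
    n = len(p_continue)
    e = []
    for k in range(n):
        surv, total = 1.0, 0.0
        for j in range(k, n):
            surv *= p_continue[j]     # survival from year k through year j
            total += surv
        e.append(total)
    return e
```

The study's illness-impact measure would then be an analogous expectation of remaining sick time divided by this expected remaining service time.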

Relevance: 90.00%

Abstract:

Background: No studies have attempted to determine whether nodal surgery utilization, time to initiation and completion of chemotherapy, or surveillance mammography impact breast cancer survival.

Objectives and Methods: To determine whether receipt of nodal surgery, initiation and completion of chemotherapy, and surveillance mammography impact racial disparities in survival among breast cancer patients in SEER areas, 1992-2005.

Results: Adjusting for nodal surgery did not reduce racial disparities in survival. Patients who initiated chemotherapy more than three months after surgery were 1.8 times more likely to die of breast cancer (95% CI 1.3-2.5) than those who initiated chemotherapy less than a month after surgery, even after controlling for known confounders and for race. Despite correcting for chemotherapy initiation and completion and known predictors of outcome, African American women still had worse disease-specific survival than their Caucasian counterparts. We found that non-Whites underwent surveillance mammography less frequently than Whites, and that mammography use during a one- or two-year time interval was associated with a small reduced risk of breast-cancer-specific and all-cause mortality. Women who received a mammogram during a two-year interval could expect the same disease-specific or overall survival benefit as women who received a mammogram during a one-year interval. While adjustment for surveillance mammography receipt and physician visits reduced differences in mortality between Blacks and Whites, these survival disparities were eliminated after adjusting for the number of surveillance mammograms received.

Conclusions: The disparities in survival among African American and Hispanic women with breast cancer are not explained by nodal surgery utilization or by chemotherapy initiation and completion. Surveillance mammograms, physician visits, and the number of mammograms received may play a major role in achieving equal breast-cancer-specific mortality outcomes for women diagnosed with primary breast cancer. Racial disparities in all-cause mortality were explained to a certain degree by racial differences in surveillance mammography, and were no longer significant after controlling for differences in comorbidity. Focusing on access to quality care and post-treatment surveillance might help achieve national goals to eliminate racial disparities in healthcare and outcomes.

Relevance: 90.00%

Abstract:

Prevalent sampling is an efficient and focused approach to the study of the natural history of disease. Right-censored time-to-event data observed in prospective prevalent cohort studies are often subject to left-truncated sampling. Left-truncated samples are not randomly selected from the population of interest and carry a selection bias. Extensive studies have focused on estimating the unbiased distribution given left-truncated samples. However, in many applications the exact date of disease onset is not observed. For example, in an HIV infection study the exact HIV infection time is not observable; it is only known that infection occurred between two observable dates. Meeting these challenges motivated our study.

We propose parametric models to estimate the unbiased distribution of left-truncated, right-censored time-to-event data with uncertain onset times. We first consider data from length-biased sampling, a special case of left-truncated sampling, and then extend the proposed method to general left-truncated sampling. With a parametric model, we construct the full likelihood given a biased sample with unobservable onset of disease. The parameters are estimated by maximizing the constructed likelihood, adjusting for the selection bias and the unobserved exact onset. Simulations are conducted to evaluate the finite-sample performance of the proposed methods. We apply the proposed method to an HIV infection study, estimating the unbiased survival function and covariate coefficients.
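For intuition about adjusting a likelihood for left truncation, consider the exponential special case with onset times assumed known (the proposed method is more general and also handles interval-uncertain onsets, which this sketch does not). Each subject contributes f(t)/S(a) to the likelihood, and by the memoryless property the MLE of the rate has a closed form: events divided by observed time at risk.

```python
def exp_mle_left_truncated(entry, time, event):
    """MLE of the exponential hazard rate under left truncation and
    right censoring, with onset times assumed known.

    entry[i] : truncation (study entry) time, measured from onset
    time[i]  : end of follow-up (event or censoring), time[i] >= entry[i]
    event[i] : 1 if failure observed, 0 if right-censored

    The truncated contribution f(t)/S(a) = rate * exp(-rate*(t - a))
    for events (and S(t)/S(a) for censorings), so the MLE reduces to
    rate = total events / total observed exposure sum(time - entry).
    """
    d = sum(event)
    exposure = sum(t - a for a, t in zip(entry, time))
    return d / exposure
```

With an unobserved onset, as in the HIV application, the exposure terms themselves become uncertain, and the full likelihood must integrate over the interval containing the onset; that is the harder problem the proposed method addresses.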

Relevance: 90.00%

Abstract:

Objective: The primary objective of our study was to examine the effect of metformin in patients with metastatic renal cell cancer (mRCC) and diabetes who are on treatment with frontline tyrosine kinase inhibitor therapy. The effect of therapy was described in terms of overall survival and progression-free survival. Comparisons were made between diabetic mRCC patients on frontline therapy who received metformin and those who received insulin. Exploratory analyses also compared non-diabetic mRCC patients receiving frontline therapy with diabetic mRCC patients receiving metformin.

Methods: The study design is a retrospective case series describing the response rate of frontline therapy in combination with metformin in mRCC patients with type 2 diabetes mellitus. The cohort was selected from a database generated to assess tyrosine kinase inhibitor therapy-associated hypertension in metastatic renal cell cancer at MD Anderson Cancer Center. Patients of all ethnic and racial backgrounds who had been started on frontline therapy for metastatic renal cell carcinoma were selected. Patients who took frontline therapy for less than 3 months or were lost to follow-up were excluded. Our exposure variable was treatment with metformin, comprising patients who took metformin for type 2 diabetes at any time from the diagnosis of metastatic renal cell carcinoma. The outcomes assessed were the last available follow-up or date of death for overall survival, and the date of progression of disease from radiological reports for time to progression. Response rates were compared by covariates known to be strongly associated with renal cell cancer.

Results: For our primary analysis comparing the insulin and metformin groups, there were 82 patients, of whom 50 took insulin therapy and 32 took metformin therapy for type 2 diabetes. For our exploratory analysis, we compared the 32 diabetic patients on metformin to 146 non-diabetic patients not on metformin. Baseline characteristics were compared between the groups. The time from the start of treatment until the date of progression of renal cell cancer, and until the date of death or last follow-up, were estimated for survival analysis.

In our primary analysis there was a significant difference in time to progression between patients receiving metformin therapy and those receiving insulin therapy, which was also seen in the exploratory analysis. The median time to progression in the primary analysis was 1259 days (95% CI: 659-1832 days) in patients on metformin therapy compared to 540 days (95% CI: 350-894 days) in patients receiving insulin therapy (p=0.024). The median time to progression in the exploratory analysis was 1259 days (95% CI: 659-1832 days) in patients on metformin therapy compared to 279 days (95% CI: 202-372 days) in the non-diabetic group (p<0.0001).

The median overall survival was 1004 days (95% CI: 761-1212 days) in the metformin group compared to 816 days (95% CI: 558-1405 days) in the insulin group (p=0.91). In the exploratory analysis, the median overall survival was 1004 days (95% CI: 761-1212 days) in the metformin group compared to 766 days (95% CI: 649-965 days) in the non-diabetic group (p=0.78). Metformin was observed to increase progression-free survival in both the primary and exploratory analyses (HR=0.52 for metformin vs. insulin and HR=0.36 for metformin vs. non-diabetic, respectively).

Conclusion: In laboratory studies and a few clinical studies, metformin has been shown to have dual benefits in patients with cancer and type 2 diabetes, via its action on the mammalian target of rapamycin (mTOR) pathway and its effect in decreasing blood sugar by increasing the sensitivity of insulin receptors to insulin. Several studies in breast cancer patients have documented a beneficial effect (quantified by pathological remission of cancer) of metformin use in patients undergoing treatment for breast cancer. Combining metformin with frontline therapy for renal cell cancer may provide a significant benefit in prolonging overall survival in patients with metastatic renal cell cancer and diabetes.