16 results for Time-to-collision
in DigitalCommons@The Texas Medical Center
Abstract:
BACKGROUND: Renal involvement is a serious manifestation of systemic lupus erythematosus (SLE); it may portend a poor prognosis as it may lead to end-stage renal disease (ESRD). The purpose of this study was to determine the factors predicting the development of renal involvement and its progression to ESRD in a multi-ethnic SLE cohort (PROFILE). METHODS AND FINDINGS: PROFILE includes SLE patients from five different United States institutions. We examined at baseline the socioeconomic-demographic, clinical, and genetic variables associated with the development of renal involvement and its progression to ESRD by univariable and multivariable Cox proportional hazards regression analyses. Analyses of onset of renal involvement included only patients with renal involvement after SLE diagnosis (n = 229). Analyses of ESRD included all patients, regardless of whether renal involvement occurred before, at, or after SLE diagnosis (34 of 438 patients). In addition, we performed a multivariable logistic regression analysis of the variables associated with the development of renal involvement at any time during the course of SLE. In the time-dependent multivariable analysis, patients developing renal involvement were more likely to have more American College of Rheumatology criteria for SLE, and to be younger, hypertensive, and of African-American or Hispanic (from Texas) ethnicity. Alternative regression models were consistent with these results. In addition to greater accrued disease damage (renal damage excluded), younger age, and Hispanic ethnicity (from Texas), homozygosity for the valine allele of FcgammaRIIIa (FCGR3A*GG) was a significant predictor of ESRD. Results from the multivariable logistic regression model that included all cases of renal involvement were consistent with those from the Cox model. CONCLUSIONS: Fcgamma receptor genotype is a risk factor for progression of renal disease to ESRD. Since the frequency distribution of FCGR3A alleles does not vary significantly among the ethnic groups studied, the additional factors underlying the ethnic disparities in renal disease progression remain to be elucidated.
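The time-to-event analyses described in this abstract rest on Cox proportional hazards regression. The sketch below shows, under stated assumptions, how such a model might be fit with the lifelines package; the simulated covariates, follow-up times, and column names are hypothetical and are not the PROFILE cohort data.

```python
# Minimal sketch of a Cox proportional hazards fit (hypothetical data, not PROFILE).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "age_at_dx":    rng.normal(35, 10, n),   # age at SLE diagnosis (made up)
    "hypertension": rng.integers(0, 2, n),   # 1 = hypertensive
    "acr_criteria": rng.integers(4, 10, n),  # number of ACR criteria met
})
# Event time loosely dependent on one covariate, purely for illustration.
baseline = rng.exponential(8.0, n)
df["time_to_renal"] = baseline * np.exp(-0.3 * df["hypertension"])
df["renal_event"] = (df["time_to_renal"] < 10).astype(int)  # administrative censoring at 10 years
df["time_to_renal"] = df["time_to_renal"].clip(upper=10)

cph = CoxPHFitter()
cph.fit(df, duration_col="time_to_renal", event_col="renal_event")
cph.print_summary()  # hazard ratios and 95% CIs for each covariate
```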
Abstract:
Issue editor introduction to Volume 2, Issue 2 of the Journal of Applied Research on Children.
Abstract:
The purpose of this research was two-fold: to investigate the effect of institutionalization on death and CD4 decline in a cohort of 325 HIV-infected Romanian children, and to investigate the effect of disclosure of the child's own HIV status in this cohort. All children were treated with Kaletra-based highly active antiretroviral therapy and were followed from November 2001 through October 2004. The mean age of the children included in the cohort was 13 years. The study found that children in biological families were more likely to experience disease progression through either death or CD4 decline than children in institutions (p=0.04). The family home-style institution may prove to be a replicable model for the safe and appropriate care of HIV-infected orphaned and abandoned children and teens. The study also found that children who did not know their own HIV infection status were more likely to experience disease progression through either death or CD4 decline than children who knew their HIV diagnosis (p=0.03). This evidence suggests that, in the context of highly active antiretroviral therapy, knowledge of one's own HIV infection status is associated with delayed HIV disease progression.
Abstract:
The determination of the size and power of a test is a vital part of clinical trial design. This research focuses on the simulation of clinical trial data with time-to-event as the primary outcome. It investigates the impact of different recruitment patterns and time-dependent hazard structures on the size and power of the log-rank test. A non-homogeneous Poisson process is used to simulate entry times according to the different accrual patterns. A Weibull distribution is employed to simulate survival times according to the different hazard structures. The current study utilizes simulation methods to evaluate the effect of different recruitment patterns on size and power estimates of the log-rank test. The size of the log-rank test is estimated by simulating survival times with identical hazard rates in the treatment and control arms of the study, resulting in a hazard ratio of one. The power of the log-rank test at specific values of the hazard ratio (≠ 1) is estimated by simulating survival times with different, but proportional, hazard rates for the two arms of the study. Different shapes (constant, decreasing, or increasing) of the hazard function of the Weibull distribution are also considered to assess the effect of hazard structure on the size and power of the log-rank test.
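A minimal simulation sketch of this approach follows: Weibull survival times are generated for two arms under proportional hazards, and the size and power of the log-rank test are estimated by Monte Carlo. All parameter values are illustrative assumptions, not those used in the study, and the non-homogeneous Poisson accrual component is omitted for brevity (administrative censoring stands in for staggered entry).

```python
# Size/power of the log-rank test by simulation (illustrative parameters only).
import numpy as np
from lifelines.statistics import logrank_test

rng = np.random.default_rng(42)

def simulate_arm(n, shape, scale, censor_time):
    """Weibull event times with administrative censoring at censor_time."""
    t = scale * rng.weibull(shape, size=n)
    observed = (t <= censor_time).astype(int)
    return np.minimum(t, censor_time), observed

def rejection_rate(hazard_ratio, n_per_arm=100, shape=1.0, scale=5.0,
                   censor_time=3.0, n_sims=500, alpha=0.05):
    # Under proportional hazards, scaling the treatment-arm Weibull scale by
    # hazard_ratio**(-1/shape) yields the desired hazard ratio.
    rejections = 0
    for _ in range(n_sims):
        t0, e0 = simulate_arm(n_per_arm, shape, scale, censor_time)
        t1, e1 = simulate_arm(n_per_arm, shape,
                              scale * hazard_ratio ** (-1.0 / shape), censor_time)
        res = logrank_test(t0, t1, event_observed_A=e0, event_observed_B=e1)
        rejections += int(res.p_value < alpha)
    return rejections / n_sims

print("size  (HR = 1):  ", rejection_rate(hazard_ratio=1.0))  # should be near alpha
print("power (HR = 1.5):", rejection_rate(hazard_ratio=1.5))
```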
Abstract:
Introduction: Online courses provide flexible access to education from a distance. However, learners may encounter frustration and disappointment in the learning process for various reasons. Faculty might not be familiar with adult learning principles. The online course developer may lack the knowledge, experience, or skills necessary for developing online courseware. Online course development can take more time and resources, and the course itself can take longer to deliver. It is, therefore, important that online course development be made efficient and effective for best student learning. [See PDF for complete abstract]
Abstract:
Tuberculosis remains a major threat as drug resistance continues to increase. Pulmonary tuberculosis in adults is responsible for 80% of clinical cases and nearly 100% of transmission of infection. Unfortunately, since we have no animal models of adult-type pulmonary tuberculosis, the most important type of disease remains largely out of reach of modern science and many fundamental questions remain unanswered. This paper reviews research dating back to the 1950s that provides compelling evidence that cord factor (trehalose 6,6'-dimycolate [TDM]) is essential for understanding tuberculosis. However, the original papers by Bloch and Noll were too far ahead of their time to have immediate impact. We can now recognize that the physical and biologic properties of cord factor are unprecedented in science, especially its ability to switch between two sets of biologic activities with changes in conformation. While TDM remains on organisms, it protects them from killing within macrophages, reduces antibiotic effectiveness, and inhibits the stimulation of protective immune responses. If it comes off organisms and associates with lipid, TDM becomes a driver of tissue damage and necrosis. Studies emanating from cord factor research have produced (1) a rationale for improving vaccines, (2) an approach to new drugs that overcome natural resistance to antibiotics, (3) models of caseating granulomas that reproduce multiple manifestations of human tuberculosis, (4) evidence that TDM is a key T cell antigen in destructive lesions of tuberculosis, and (5) a new understanding of the pathology and pathogenesis of postprimary tuberculosis that can guide more informative studies of long-standing mysteries of tuberculosis.
Abstract:
Although the literature has provided many critiques of research done on family preservation programs, these critiques have usually been limited to the studies' assumptions, approach, or methodology. Because of the nature of these critiques, suggestions for future research in this field of practice have been scattered throughout the literature and have not benefited from a wider historical perspective. This paper examines the historical evolution of family preservation studies in child welfare and suggests future directions for research in the field. Among the suggestions the authors posit are (1) research questions should be framed by what we know about improvements in the lives of families and children served by family preservation programs; (2) future explorations should include areas that have received relatively little attention in current research, including the impact of organizational conditions on service fidelity and worker performance; (3) newer treatment models, particularly those that provide both intensive services during a crisis period and less intensive services for maintenance, should be tested; (4) data collection points in longitudinal studies should be guided by theory, and measures should change over time to reflect the theoretically expected changes in families; (5) complex measures of placement prevention and other measures that capture changes in family functioning, child well-being, and child safety should be utilized to obtain a full picture of program effects; and (6) multiple informants should be used to provide data about program effectiveness. In addition, the authors argue that the field should carefully consider the amount of change that should be expected from the service models delivered.
Abstract:
The nutritional problems of food insecurity and obesity co-exist among low-income children. As the reauthorization of SNAP approaches in 2012, it is time to consider the dietary intake of food-insecure children and how the SNAP program can assist with improving the nutrition of low-income children while also helping to reduce the prevalence of childhood obesity among food-insecure households with children.
Abstract:
Secondary acute myeloid leukemia (AML) and myelodysplastic syndrome (MDS) are recognized as among the most feared long-term complications of cancer therapy. The aim of this case-control study was to determine the prevalence of chromosomal abnormalities and family history of cancer among secondary AML/MDS cases and de novo AML/MDS controls. The study population comprised 332 MD Anderson Cancer Center patients registered between 1986 and 1994. Cases were patients who had a prior invasive cancer before the diagnosis of AML/MDS, and controls were patients with de novo AML/MDS. Cases (166) and controls (166) were frequency matched on age (±5 years), sex, and year of diagnosis of leukemia. Cytogenetic data were obtained from the leukemia clinic database of MD Anderson Cancer Center, and data on family history of cancer and other risk factors were abstracted from the patients' medical records. The distribution of AML and MDS was 58% and 42%, respectively, among cases and 67% and 33%, respectively, among controls. Chromosomal abnormalities were observed more frequently among cases than controls. Reporting of family history of cancer was similar in both groups. Univariate analysis revealed an odds ratio (OR) of 2.8 (95% CI 1.5-5.4) for deletion of chromosome 7, 1.9 (95% CI 0.9-3.8) for deletion of chromosome 5, 2.3 (95% CI 0.8-6.2) for deletion of 5q, 2.0 (95% CI 1.0-4.2) for trisomy 8, 1.3 (95% CI 0.8-2.1) for chromosomal abnormalities other than chromosome 5 or 7, and 1.3 (95% CI 0.8-2.0) for family history of cancer in a first-degree relative. The OR remained significant for deletion of chromosome 7 (2.3, 95% CI 1.1-4.8) after adjustment for age, alcohol, smoking, occupation related to chemical exposure, and family history of cancer in a first-degree relative. Of the 166 secondary AML/MDS patients, 70% had a prior solid tumor and 30% had a prior hematological cancer. The most frequent cancers were breast (21.1%), non-Hodgkin lymphoma (13.3%), Hodgkin's disease (10.2%), prostate (7.2%), colon (6%), multiple myeloma (3.6%), and testes (3.0%). The majority of these cancer patients were treated with chemotherapy, radiotherapy, or both. Abnormalities of chromosome 5 or 7 were found to be more frequent in secondary AML/MDS patients with prior hematological cancer than in patients with prior solid tumors. The median time to develop secondary AML/MDS was 5 years. However, secondary AML/MDS among patients who received chemotherapy and had a family history of cancer in a first-degree relative occurred earlier (median 2.25 ± 0.9 years) than among patients without such a family history (median 5.50 ± 0.18 years) (p < .03). The implication of exposure to chemotherapy among patients with a family history of cancer needs to be further investigated.
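For reference, the univariate odds ratios quoted above follow the standard 2x2-table calculation. A minimal sketch with a Woolf (log) confidence interval is shown below; the counts are made up for illustration and are not the study's data.

```python
# Odds ratio with a Woolf (log) 95% CI from a 2x2 table (hypothetical counts).
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a, b = exposed/unexposed cases; c, d = exposed/unexposed controls."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# e.g., deletion of chromosome 7 among cases vs. controls (made-up counts)
print(odds_ratio_ci(a=40, b=126, c=18, d=148))
```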
Abstract:
The desire to promote efficient allocation of health resources and effective patient care has focused attention on home care as an alternative to acute hospital service. In particular, clinical home care is suggested as a substitute for the final days of hospital stay. This dissertation evaluates the relationship between hospital and home care services for residents of British Columbia, Canada, beginning in 1993/94, using data from the British Columbia Linked Health database. Lengths of stay for patients referred to home care following hospital discharge are compared to those for patients not referred to home care. Ordinary least squares regression analysis adjusts for age, gender, admission severity, comorbidity, complications, income, and other patient, physician, and hospital characteristics. Home care clients tend to have longer stays in hospital than patients not referred to home care (β = 2.54, p = 0.0001). Longer hospital stays are evident for all home care client groups as well as both older and younger patients. Sensitivity analyses for referral time to direct care and extreme lengths of stay are consistent with these findings. Two-stage regression analysis indicates that selection bias is not significant. Patients referred to clinical home care also have different health service utilization following discharge compared to patients not referred to home care. Home care nursing clients use more medical services to complement home care. Rehabilitation clients initially substitute home care for physiotherapy services but later are more likely to be admitted to residential care. All home care clients are more likely to be readmitted to hospital during the one-year follow-up period. There is also a strong complementary association between direct care referral and homemaker support. Rehabilitation clients have a greater risk of dying during the year following discharge. These results suggest that home care is currently used as a complement rather than a substitute for some acute health services. Organizational and resource issues may contribute to the longer stays by home care clients. Program planning and policies are required if home care is to provide an effective substitute for acute hospital days.
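A minimal sketch of the kind of adjusted length-of-stay comparison reported above, using ordinary least squares via statsmodels. The variable names and the small data frame are hypothetical; the coefficient on the home-care indicator plays the role of the β reported in the abstract.

```python
# Adjusted length-of-stay comparison by OLS (hypothetical variables and data).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "los_days":    [4, 9, 6, 12, 3, 8, 5, 11, 7, 10],
    "home_care":   [0, 1, 0, 1, 0, 1, 0, 1, 0, 1],   # referred to home care at discharge
    "age":         [61, 74, 58, 80, 45, 69, 52, 77, 66, 71],
    "severity":    [2, 3, 2, 4, 1, 3, 2, 4, 2, 3],
    "comorbidity": [1, 2, 0, 3, 0, 2, 1, 3, 1, 2],
})

model = smf.ols("los_days ~ home_care + age + severity + comorbidity", data=df).fit()
print(model.summary())  # the home_care coefficient is the adjusted difference in LOS
```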
Abstract:
Various airborne aldehydes and ketones (i.e., airborne carbonyls) present in outdoor, indoor, and personal air pose a risk to human health at present environmental concentrations. To date, there is no adequate, simple-to-use sampler for monitoring carbonyls at parts-per-billion concentrations in personal air. The Passive Aldehydes and Ketones Sampler (PAKS), originally developed for this purpose, has been found to be unreliable in a number of relatively recent field studies. The PAKS method uses dansylhydrazine (DNSH) as the derivatization agent to produce aldehyde derivatives that are analyzed by HPLC with fluorescence detection. The reasons for the poor performance of the PAKS are not known, but it is hypothesized that the chemical derivatization conditions and reaction kinetics, combined with a relatively low sampling rate, may play a role. This study evaluated the effect of excitation and emission wavelengths, pH of the DNSH coating solution, extraction solvent, and time post-extraction on the yield and stability of the formaldehyde, acetaldehyde, and acrolein DNSH derivatives. The results suggest the following optimum conditions for the analysis of the DNS-hydrazones. The excitation and emission wavelengths for HPLC analysis should be 250 nm and 500 nm, respectively. The optimal pH of the coating solution appears to be 2, because it improves the formation of the di-derivatized acrolein DNS-hydrazone without affecting the response of the formaldehyde and acetaldehyde derivatives. Acetonitrile is the preferred extraction solvent, and the optimal time to analyze the aldehyde derivatives is 72 hours post-extraction.
Abstract:
Research studies on the association between exposures to air contaminants and disease frequently use worn dosimeters to measure the concentration of the contaminant of interest. But investigation of exposure determinants requires additional knowledge beyond concentration, i.e., knowledge about personal activity such as whether the exposure occurred in a building or outdoors. Current studies frequently depend upon manual activity logging to record location. This study's purpose was to evaluate the use of a worn data logger recording three environmental parameters—temperature, humidity, and light intensity—as well as time of day, to determine indoor or outdoor location, with an ultimate aim of eliminating the need to manually log location or at least providing a method to verify such logs. For this study, data collection was limited to a single geographical area (Houston, Texas metropolitan area) during a single season (winter) using a HOBO H8 four-channel data logger. Data for development of a Location Model were collected using the logger for deliberate sampling of programmed activities in outdoor, building, and vehicle locations at various times of day. The Model was developed by analyzing the distributions of environmental parameters by location and time to establish a prioritized set of cut points for assessing locations. The final Model consisted of four "processors" that varied these priorities and cut points. Data to evaluate the Model were collected by wearing the logger during "typical days" while maintaining a location log. The Model was tested by feeding the typical day data into each processor and generating assessed locations for each record. These assessed locations were then compared with true locations recorded in the manual log to determine accurate versus erroneous assessments. The utility of each processor was evaluated by calculating overall error rates across all times of day, and calculating individual error rates by time of day. Unfortunately, the error rates were large, such that there would be no benefit in using the Model. Another analysis in which assessed locations were classified as either indoor (including both building and vehicle) or outdoor yielded slightly lower error rates that still precluded any benefit of the Model's use.
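A minimal sketch of what one cut-point "processor" might look like: a prioritized set of rules over logged temperature, humidity, light intensity, and time of day that assigns an indoor, outdoor, or vehicle location. The thresholds and priorities below are illustrative assumptions, not the cut points derived in the study.

```python
# Rule-based location assessment from logger readings (illustrative thresholds only).
from dataclasses import dataclass

@dataclass
class LoggerRecord:
    hour: int          # time of day, 0-23
    temp_c: float      # temperature, deg C
    rh_pct: float      # relative humidity, %
    light_lux: float   # light intensity, lux

def assess_location(rec: LoggerRecord) -> str:
    # Priority 1: very high daytime light strongly suggests outdoors.
    if 7 <= rec.hour <= 18 and rec.light_lux > 1000:
        return "outdoor"
    # Priority 2: tightly controlled temperature and humidity suggest a building.
    if 20.0 <= rec.temp_c <= 24.0 and rec.rh_pct < 60.0:
        return "building"
    # Priority 3: moderate light outside the building envelope -> vehicle.
    if rec.light_lux > 200:
        return "vehicle"
    # Fallback when no rule fires.
    return "building"

print(assess_location(LoggerRecord(hour=14, temp_c=18.0, rh_pct=70.0, light_lux=5000.0)))
```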
Abstract:
The federal Food and Drug Administration (FDA) and the Centers for Medicare and Medicaid Services (CMS) play key roles in making Class III medical devices available to the public, and they are required by law to meet statutory deadlines for applications under review. Historically, both agencies have failed to meet their respective statutory requirements. Since these failures affect patient access and may adversely impact public health, Congress has enacted several "modernization" laws. However, the effectiveness of these modernization laws has not been adequately studied or established for Class III medical devices. The aim of this research study was, therefore, to analyze how these modernization laws may have affected public access to medical devices. Two questions were addressed: (1) How have the FDA modernization laws affected the time to approval for medical device premarket approval applications (PMAs)? (2) How has the CMS modernization law affected the time to approval for national coverage decisions (NCDs)? The data for this research study were collected from publicly available databases for the period January 1, 1995, through December 31, 2008. These dates were selected to ensure that a sufficient period of time was captured to measure pre- and post-modernization effects on time to approval. All records containing original PMAs were obtained from the FDA database, and all records containing NCDs were obtained from the CMS database. Source documents, including FDA premarket approval letters and CMS national coverage decision memoranda, were reviewed to obtain additional data not found in the search results. Analyses were conducted to determine pre- and post-modernization effects on time to approval. Secondary analyses of FDA subcategories were conducted to uncover any causal factors that might explain differences in time to approval and to compare with the primary trends. The primary analysis showed that the FDA modernization laws of 1997 and 2002 initially reduced PMA time to approval; after the 2002 modernization law, the time to approval began increasing and continued to increase through December 2008. The non-combined subcategory approval trends were similar to the primary analysis trends. The combined subcategory analysis showed no clear trends, with the exception of non-implantable devices, for which time to approval trended down after 1997. The CMS modernization law of 2003 reduced NCD time to approval, a trend that continued through December 2008. This study also showed that approximately 86% of PMA devices do not receive NCDs. As a result of this research study, recommendations are offered to help resolve statutory non-compliance and access issues, as follows: (1) Authorities should examine underlying causal factors for the observed trends; (2) Process improvements should be made to better coordinate FDA and CMS activities, including sharing data, reducing duplication, and establishing clear criteria for "safe and effective" and "reasonable and necessary"; (3) A common identifier should be established to allow tracking and trending of applications between FDA and CMS databases; (4) Statutory requirements may need to be revised; and (5) An investigation should be undertaken to determine why NCDs are not issued for the majority of PMAs. Any process improvements should be made without creating additional safety risks or adversely impacting public health. Finally, additional studies are needed to fully characterize and better understand the trends identified in this research study.
Abstract:
A life table methodology was developed that estimates the expected remaining Army service time and the expected remaining Army sick time by years of service for the United States Army population. A measure of illness impact was defined as the ratio of expected remaining Army sick time to the expected remaining Army service time. The variances of the resulting estimators were developed on the basis of current data. The theory of partial and complete competing risks was considered for each type of decrement (death, administrative separation, and medical separation) and for the causes of sick time. The methodology was applied to worldwide U.S. Army data for calendar year 1978. A total of 669,493 enlisted personnel and 97,704 officers were reported on active duty as of 30 September 1978. During calendar year 1978, the Army Medical Department reported 114,647 inpatient discharges and 1,767,146 sick days. Although the methodology is completely general with respect to the definition of sick time, only sick time associated with an inpatient episode was considered in this study. Since the temporal measure was years of Army service, an age-adjusting process was applied to the life tables for comparative purposes. Analyses were conducted by rank (enlisted and officer), race, and sex, and were based on the ratio of expected remaining Army sick time to expected remaining Army service time. Seventeen major diagnostic groups, classified by the Eighth Revision, International Classification of Diseases, Adapted for Use in the United States, were ranked according to their cumulative (across years of service) contribution to expected remaining sick time. The study results indicated that enlisted personnel tend to have more expected hospital-associated sick time relative to their expected Army service time than officers. Non-white officers generally have more expected sick time relative to their expected Army service time than white officers. This racial differential was not supported within the enlisted population. Females tend to have more expected sick time relative to their expected Army service time than males. This tendency remained after diagnostic groups 580-629 (Genitourinary System) and 630-678 (Pregnancy and Childbirth) were removed. Problems associated with the circulatory system, digestive system, and musculoskeletal system were the three leading causes of cumulative sick time across years of service.
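A toy sketch of the illness-impact measure defined above: the ratio of expected remaining sick time to expected remaining service time, computed from a simple life table indexed by years of service. The survivor counts and sick days are invented for illustration and are not Army data, and the life-table conventions are simplified.

```python
# Illness-impact ratio from a toy life table (invented numbers, simplified method).
import numpy as np

# survivors[k] = number still in service at the start of service-year k
# (final 0 marks the end of service); sick_days[k] = cohort sick days in year k.
survivors = np.array([1000, 900, 800, 650, 500, 300, 100, 0], dtype=float)
sick_days = np.array([2000, 1900, 1700, 1500, 1200, 800, 300], dtype=float)

def illness_impact(from_year: int) -> float:
    # Person-years lived in each later year, approximated by the mean of
    # adjacent survivor counts (the usual life-table convention).
    person_years = (survivors[from_year:-1] + survivors[from_year + 1:]) / 2.0
    at_risk = survivors[from_year]
    expected_service_years = person_years.sum() / at_risk
    expected_sick_years = sick_days[from_year:].sum() / at_risk / 365.25
    return expected_sick_years / expected_service_years

print(illness_impact(0))  # ratio at entry into service
print(illness_impact(3))  # ratio after 3 years of service
```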
Abstract:
Breast cancer is the most common non-skin cancer and the second leading cause of cancer-related death in women in the United States. Studies on ipsilateral breast tumor relapse (IBTR) status and disease-specific survival will help guide clinical treatment and predict patient prognosis. After breast conservation therapy, patients with breast cancer may experience breast tumor relapse. This relapse is classified into two distinct types: true local recurrence (TR) and new ipsilateral primary tumor (NP). However, the methods used to classify the relapse types are imperfect and are prone to misclassification. In addition, some observed survival data (e.g., time to relapse and time from relapse to death) are strongly correlated with relapse types. The first part of this dissertation presents a Bayesian approach to (1) modeling the potentially misclassified relapse status and the correlated survival information, (2) estimating the sensitivity and specificity of the diagnostic methods, and (3) quantifying the covariate effects on event probabilities. A shared frailty was used to account for the within-subject correlation between survival times. The inference was conducted using a Bayesian framework via Markov chain Monte Carlo simulation implemented in the software WinBUGS. Simulation was used to validate the Bayesian method and assess its frequentist properties. The new model has two important innovations: (1) it utilizes the additional survival times correlated with the relapse status to improve the parameter estimation, and (2) it provides tools to address the correlation between the two diagnostic methods conditional on the true relapse types. Prediction of patients at highest risk for IBTR after local excision of ductal carcinoma in situ (DCIS) remains a clinical concern. The goals of the second part of this dissertation were to evaluate a published nomogram from Memorial Sloan-Kettering Cancer Center, to determine the risk of IBTR in patients with DCIS treated with local excision, and to determine whether there is a subset of patients at low risk of IBTR. Patients who had undergone local excision from 1990 through 2007 at MD Anderson Cancer Center with a final diagnosis of DCIS (n=794) were included in this part. Clinicopathologic factors and the performance of the Memorial Sloan-Kettering Cancer Center nomogram for prediction of IBTR were assessed for 734 patients with complete data. The nomogram predictions of 5- and 10-year IBTR probabilities were found to demonstrate imperfect calibration and discrimination, with an area under the receiver operating characteristic curve of 0.63 and a concordance index of 0.63. In conclusion, predictive models for IBTR in DCIS patients treated with local excision are imperfect. Our current ability to accurately predict recurrence based on clinical parameters is limited. The American Joint Committee on Cancer (AJCC) staging of breast cancer is widely used to determine prognosis, yet survival within each AJCC stage shows wide variation and remains unpredictable. For the third part of this dissertation, biologic markers were hypothesized to be responsible for some of this variation, and the addition of biologic markers to current AJCC staging was examined as a possible means of improving prognostication. The initial cohort included patients treated with surgery as the first intervention at MDACC from 1997 to 2006. Cox proportional hazards models were used to create prognostic scoring systems. AJCC pathologic staging parameters and biologic tumor markers were investigated to devise the scoring systems. Surveillance, Epidemiology, and End Results (SEER) data were used as the external cohort to validate the scoring systems. Binary indicators for pathologic stage (PS), estrogen receptor status (E), and tumor grade (G) were summed to create PS+EG scoring systems devised to predict 5-year patient outcomes. These scoring systems facilitated separation of the study population into more refined subgroups than the current AJCC staging system. The ability of the PS+EG score to stratify outcomes was confirmed in both internal and external validation cohorts. The current study proposes and validates a new staging system by incorporating tumor grade and ER status into current AJCC staging. We recommend that biologic markers be incorporated into revised versions of the AJCC staging system for patients receiving surgery as the first intervention. Chapter 1 focuses on developing a Bayesian method to address misclassified relapse status and its application to breast cancer data. Chapter 2 focuses on the evaluation of a breast cancer nomogram for predicting the risk of IBTR in patients with DCIS after local excision. Chapter 3 focuses on validation of a novel staging system for disease-specific survival in patients with breast cancer treated with surgery as the first intervention.
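A minimal sketch of a PS+EG-style score as described above: binary indicators for pathologic stage, estrogen-receptor status, and tumor grade are summed into a single prognostic score. The dichotomization rules below are illustrative assumptions, not the validated scoring system from the dissertation.

```python
# Illustrative PS+EG-style score: sum of binary stage, ER, and grade indicators.
def ps_eg_score(pathologic_stage: str, er_positive: bool, grade: int) -> int:
    ps = 1 if pathologic_stage in {"II", "III"} else 0   # higher pathologic stage scores 1 (assumed rule)
    e = 0 if er_positive else 1                          # ER-negative scores 1 (assumed rule)
    g = 1 if grade == 3 else 0                           # high grade scores 1 (assumed rule)
    return ps + e + g

# Example: stage II, ER-negative, grade 3 tumor -> score 3 (highest-risk group here)
print(ps_eg_score("II", er_positive=False, grade=3))
```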