858 results for Early Diagnosis
Abstract:
Background: Niemann-Pick disease type C (NP-C) is a rare autosomal recessive disorder of lysosomal cholesterol transport. The objective of this retrospective cohort study was to critically analyze the onset and time course of symptoms, as well as the clinical diagnostic work-up, in the Swiss NP-C cohort. Methods: Clinical, biochemical and genetic data were assessed for 14 patients from 9 families diagnosed with NP-C between 1994 and 2013. We retrospectively evaluated diagnostic delays and period prevalence rates for neurological, psychiatric and visceral symptoms associated with NP-C disease. The NP-C suspicion index was calculated at the time of neurological disease onset and at the time of diagnosis. Results: The shortest median diagnostic delay was noted for vertical supranuclear gaze palsy (2 years). Ataxia, dysarthria, dysphagia, spasticity, cataplexy, seizures and cognitive decline displayed similar median diagnostic delays (4-5 years). The longest median diagnostic delay was associated with hepatosplenomegaly (15 years). The highest period prevalence rates were noted for ataxia, dysarthria, vertical supranuclear gaze palsy and cognitive decline. The NP-C suspicion index revealed a median score of 81 points in nine patients at the time of neurological disease onset, which is highly suspicious for NP-C disease. At the time of diagnosis, the score increased to 206 points. Conclusion: A neurologic-psychiatric disease pattern represents the most characteristic clinical manifestation of NP-C and occurs early in the disease course. Visceral manifestations such as isolated hepatosplenomegaly often fail to be recognized, which highlights the importance of a work-up for lysosomal storage disorders. The NP-C suspicion index emphasizes the importance of a multisystem evaluation, but appears weak in monosymptomatic and infantile NP-C patients.
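The suspicion index described above aggregates weighted symptom scores across visceral, neurological and psychiatric domains into a single number that guides referral for testing. A minimal sketch of such a scoring scheme follows; the symptom names, point values and thresholds here are illustrative placeholders, not the published NP-C suspicion index weights.

```python
# Illustrative suspicion-index calculator. All point values and the
# risk thresholds below are hypothetical stand-ins, NOT the published
# NP-C suspicion index weights.
DOMAIN_POINTS = {
    "vertical_supranuclear_gaze_palsy": 40,  # hypothetical weight
    "ataxia": 20,
    "hepatosplenomegaly": 20,
    "cognitive_decline": 10,
    "psychosis": 10,
}

def suspicion_score(findings):
    """Sum the points for every finding present in the patient."""
    return sum(DOMAIN_POINTS[f] for f in findings)

def risk_category(score):
    """Map a total score to an action category (thresholds illustrative)."""
    if score >= 70:
        return "high suspicion - refer for NP-C testing"
    if score >= 40:
        return "moderate suspicion - follow up"
    return "low suspicion"

score = suspicion_score(
    ["vertical_supranuclear_gaze_palsy", "ataxia", "cognitive_decline"]
)
print(score, risk_category(score))  # 70 high suspicion - refer for NP-C testing
```

The multisystem character of the real index is mirrored here: no single domain reaches the referral threshold alone, which also illustrates why the abstract notes weakness in monosymptomatic patients.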
Abstract:
Early assessment of response at 3 months of tyrosine kinase inhibitor treatment has become an important tool to predict favorable outcome. We sought to investigate the impact of relative changes in BCR-ABL transcript levels within the initial 3 months of therapy. To achieve accurate data for high BCR-ABL levels at diagnosis, beta-glucuronidase (GUS) was used as a reference gene. Within the German CML-Study IV, samples of 408 imatinib-treated patients were available in a single laboratory at both time points, diagnosis and 3 months on treatment. In total, 301 of these were treatment-naïve at sample collection. Results: (i) with regard to absolute transcript levels at diagnosis, no predictive cutoff could be identified; (ii) at 3 months, an individual reduction of BCR-ABL transcripts to 0.35-fold of the baseline level (a 0.46-log reduction, that is, roughly half a log) separated best (high risk: 16% of patients; 5-year overall survival (OS) 83% vs 98%; hazard ratio (HR) 6.3; P=0.001); (iii) at 3 months, a 6% BCR-ABL(IS) cutoff derived from BCR-ABL/GUS yielded a good and sensitive discrimination (high risk: 22% of patients; 5-year OS 85% vs 98%; HR 6.1; P=0.002). Patients at risk of disease progression can thus be identified precisely by the lack of a half-log reduction of BCR-ABL transcripts at 3 months.
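The cutoffs above relate a fold-change to a log10 reduction: a drop to 0.35-fold of baseline corresponds to log10(1/0.35) ≈ 0.46 log, i.e. roughly half a log. A short sketch of that conversion and the resulting risk flag (function names are mine, not from the study):

```python
import math

def log_reduction(baseline, month3):
    """log10 fold-reduction of the BCR-ABL transcript level from baseline."""
    return math.log10(baseline / month3)

# A reduction to 0.35-fold of baseline is roughly a half-log reduction:
print(round(log_reduction(100.0, 35.0), 2))  # 0.46

def high_risk(baseline, month3, cutoff=0.46):
    """Flag patients lacking the ~half-log reduction at 3 months."""
    return log_reduction(baseline, month3) < cutoff

print(high_risk(100.0, 50.0))  # True: only a 0.30-log reduction
print(high_risk(100.0, 10.0))  # False: a full 1.0-log reduction
```

Note that the criterion is relative to each patient's own baseline, which is why the abstract stresses accurate quantification of high transcript levels at diagnosis.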
Abstract:
Patients with amnestic mild cognitive impairment (MCI) are at high risk of developing Alzheimer's disease. Besides episodic memory dysfunction, they show deficits in accessing the contextual knowledge that further specifies a general task, such as spatial navigation or executive function (EF) virtual action planning. Virtual reality (VR) environments have already been used successfully in cognitive rehabilitation and show increasing potential for use in neuropsychological evaluation, allowing for greater ecological validity while being more engaging and user friendly. In our study we employed the in-house virtual action planning museum (VAP-M) platform and a sample of 25 MCI patients and 25 controls in order to investigate deficits in spatial navigation, prospective memory, and executive function. In addition, we used the morphology of late components in event-related potential (ERP) responses as a marker for cognitive dysfunction. The related measurements were fed to a common classification scheme, facilitating a direct comparison of both approaches. Our results indicate that both the VAP-M and ERP averages were able to differentiate between healthy elders and patients with amnestic mild cognitive impairment, and agree with the findings of the virtual action planning supermarket (VAP-S). The sensitivity (specificity) was 100% (98%) for the VAP-M data and 87% (90%) for the ERP responses. Considering that ERPs have been shown to advance the early detection and diagnosis of "presymptomatic AD," the suggested VAP-M platform appears to be an appealing alternative.
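The reported sensitivity and specificity are simple functions of confusion-matrix counts. A minimal sketch, using hypothetical counts for a 25-patient/25-control split (the study's raw counts are not given in the abstract):

```python
def sensitivity(tp, fn):
    """Proportion of true patients correctly classified (true positive rate)."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Proportion of true controls correctly classified (true negative rate)."""
    return tn / (tn + fp)

# Hypothetical example: all 25 MCI patients detected, 1 of 25 controls
# misclassified as a patient.
print(sensitivity(tp=25, fn=0))            # 1.0
print(round(specificity(tn=24, fp=1), 2))  # 0.96
```

With cross-validated classifiers such as the one described, these rates are usually averaged over folds rather than computed from a single split.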
Abstract:
Background: Survivors of brain tumors have a high risk for a wide range of cognitive problems. These dysfunctions are caused by the lesion itself and its surgical removal, as well as subsequent treatments (chemo- and/or radiation therapy). Multiple recent studies have indicated that children with brain tumors (BT) might already exhibit cognitive problems at diagnosis, i.e., before the start of any medical treatment. The aim of the present study was to investigate the baseline neuropsychological profile in children with BT compared to children with an oncological diagnosis not involving the central nervous system (CNS). Methods: Twenty children with BT and 27 children with an oncological disease without involvement of the CNS (age range: 6.1 to 16.9 years) were evaluated with an extensive battery of neuropsychological tests tailored to the patient’s age. Furthermore, the child and his/her parent(s) completed self-report questionnaires about emotional functioning and quality of life. In both groups, tests were administered before any therapeutic intervention such as surgery, chemotherapy or irradiation. Groups were comparable with regard to age, gender and socioeconomic status. Results: Compared to the control group, patients with BTs performed significantly worse in tests of working memory, verbal memory and attention (effect sizes between 0.28 and 0.47). In contrast, the areas of perceptual reasoning, processing speed and verbal comprehension were preserved at the time of measurement. Conclusion: Our results highlight the need for cognitive interventions early in the treatment process in order to minimize or prevent academic difficulties as patients return to school.
Abstract:
OBJECTIVE Due to an increased focus on erosive tooth wear (ETW), the European Federation of Conservative Dentistry (EFCD) considered ETW a relevant topic for generating this consensus report. MATERIALS AND METHODS This report is based on a compilation of the scientific literature, an expert conference, and approval by the General Assembly of the EFCD. RESULTS ETW is a chemical-mechanical process resulting in a cumulative loss of hard dental tissue not caused by bacteria, and it is characterized by loss of the natural surface morphology and contour of the teeth. A suitable index for the classification of ETW is the basic erosive wear examination (BEWE). Regarding etiology, patient-related factors include the predisposition to erosion, reflux, vomiting, drinking and eating habits, as well as medications and dietary supplements. Nutritional factors relate to the composition of foods and beverages, e.g., low pH and high buffer capacity (major risk factors) and calcium concentration (major protective factor). Occupational risk factors include the exposure of workers to acidic liquids or vapors. Preventive management of ETW aims at reducing or stopping the progression of the lesions. Restorative management aims at reducing symptoms of pain and dentine hypersensitivity, or at restoring esthetics and function, but it should only be used in conjunction with preventive strategies. CONCLUSIONS Effective management of ETW includes screening for early signs of ETW and evaluating all etiological factors. CLINICAL RELEVANCE ETW is a clinical condition which calls for the increased attention of the dental community and is a challenge for cooperation with other medical specialities.
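The BEWE mentioned above is a cumulative score: the highest erosive-wear grade (0-3) observed in each of the six sextants is recorded and the six values are summed. A sketch follows; the risk-level cut-offs are the commonly cited ones and should be verified against the original BEWE publication before any clinical use.

```python
# Sketch of a BEWE-style cumulative score. One grade (0-3) per sextant,
# summed across all six sextants. Risk cut-offs below follow commonly
# cited BEWE guidance but are reproduced here only for illustration.
def bewe_score(sextant_scores):
    """Cumulative BEWE score from the six per-sextant grades."""
    assert len(sextant_scores) == 6, "one score per sextant"
    assert all(0 <= s <= 3 for s in sextant_scores), "grades run 0-3"
    return sum(sextant_scores)

def bewe_risk(total):
    """Map a cumulative score to a risk level (cut-offs illustrative)."""
    if total >= 14:
        return "high"
    if total >= 9:
        return "medium"
    if total >= 3:
        return "low"
    return "none"

scores = [1, 2, 0, 1, 3, 2]      # hypothetical examination
print(bewe_score(scores))        # 9
print(bewe_risk(bewe_score(scores)))  # medium
```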
Abstract:
Background. Cryptococcal meningitis is a leading cause of death in people living with human immunodeficiency virus (HIV)/acquired immune deficiency syndrome. The World Health Organization recommends pre-antiretroviral treatment (ART) cryptococcal antigen (CRAG) screening in persons with CD4 counts below 100 cells/µL. We assessed the prevalence and outcome of cryptococcal antigenemia in rural southern Tanzania. Methods. We conducted a retrospective study including all ART-naive adults with CD4 <150 cells/µL prospectively enrolled in the Kilombero and Ulanga Antiretroviral Cohort between 2008 and 2012. Cryptococcal antigen was assessed in cryopreserved pre-ART plasma. Cox regression was used to estimate the composite outcome of death or loss to follow-up (LFU) by CRAG status and fluconazole use. Results. Of 750 ART-naive adults, 28 (3.7%) were CRAG-positive, corresponding to a prevalence of 4.4% (23 of 520) in CD4 <100 and 2.2% (5 of 230) in CD4 100-150 cells/µL. Within 1 year, 75% (21 of 28) of CRAG-positive and 42% (302 of 722) of CRAG-negative patients were dead or LFU (P<.001), with no differences across CD4 strata. Cryptococcal antigen positivity was an independent predictor of death or LFU after adjusting for relevant confounders (hazard ratio [HR], 2.50; 95% confidence interval [CI], 1.29-4.83; P = .006). Cryptococcal meningitis occurred in 39% (11 of 28) of CRAG-positive patients, with similar retention in care regardless of meningitis diagnosis (P = .8). A cryptococcal antigen titer >1:160 was associated with meningitis development (odds ratio, 4.83; 95% CI, 1.24-8.41; P = .008). Fluconazole receipt decreased death or LFU in CRAG-positive patients (HR, 0.18; 95% CI, .04-.78; P = .022). Conclusions. Cryptococcal antigenemia predicted mortality or LFU among ART-naive HIV-infected persons with CD4 <150 cells/µL, and fluconazole increased survival or retention in care, suggesting that targeted pre-ART CRAG screening may decrease early mortality or LFU.
A CRAG screening threshold of CD4 <100 cells/µL missed 18% of CRAG-positive patients, suggesting guidelines should consider a higher threshold.
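The stratum-specific prevalences and the 18% figure above follow directly from the reported counts; a quick arithmetic check:

```python
def prevalence_pct(cases, total):
    """Point prevalence as a percentage, rounded to one decimal."""
    return round(100.0 * cases / total, 1)

# Counts as reported in the abstract:
print(prevalence_pct(28, 750))  # 3.7  overall CRAG-positive
print(prevalence_pct(23, 520))  # 4.4  CD4 < 100 cells/uL
print(prevalence_pct(5, 230))   # 2.2  CD4 100-150 cells/uL

# Share of CRAG-positive patients missed by a CD4 < 100 threshold:
print(round(100.0 * 5 / 28))    # 18
```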
Abstract:
BACKGROUND The impact of early treatment with immunomodulators (IM) and/or TNF antagonists on bowel damage in Crohn's disease (CD) patients is unknown. AIM To assess whether 'early treatment' with IM and/or TNF antagonists, defined as treatment initiated within 2 years of the date of CD diagnosis, was associated with the development of fewer disease complications when compared to 'late treatment', defined as treatment initiated more than 2 years after CD diagnosis. METHODS Data from the Swiss IBD Cohort Study were analysed. The following outcomes were assessed using Cox proportional hazard modelling: bowel strictures, perianal fistulas, internal fistulas, intestinal surgery, perianal surgery, and any of the aforementioned complications. RESULTS The 'early treatment' group of 292 CD patients was compared to the 'late treatment' group of 248 CD patients. We found that 'early treatment' with IM or TNF antagonists alone was associated with a reduced risk of bowel strictures [hazard ratio (HR) 0.496, P = 0.004 for IM; HR 0.276, P = 0.018 for TNF antagonists]. Furthermore, 'early treatment' with IM was associated with a reduced risk of undergoing intestinal surgery (HR 0.322, P = 0.005) and perianal surgery (HR 0.361, P = 0.042), as well as of developing any complication (HR 0.567, P = 0.006). CONCLUSIONS Treatment with immunomodulators or TNF antagonists within the first 2 years of CD diagnosis was associated with a reduced risk of developing bowel strictures, compared to initiating these drugs more than 2 years after diagnosis. Furthermore, early immunomodulator treatment was associated with a reduced risk of intestinal surgery, perianal surgery, and any complication.
Abstract:
BACKGROUND The clinical presentation of spondylodiscitis/spondylitis is manifold. This commonly leads to a period of several months from initial symptoms to final diagnosis, and standardised treatment is difficult. The purpose of this study was to investigate the treatment carried out for patients with spondylodiscitis or spondylitis in order to develop an individualised standard of care for better treatment. PATIENTS AND METHODS Data from 90 patients were retrospectively analysed. In particular, documented data from the initial examination and the subsequent treatments concerning identification of causes and systematic control of pathogens were examined. RESULTS In 91% of patients a diagnostically conclusive MRI was conducted. The degree of spondylodiscitis/spondylitis was mainly ASA criteria I or II (86%). In 96% of patients, different diagnostic methods for the identification of pathogens were conducted and documented. The results confirmed the most common pathogens mentioned in the literature. 75% of patients were treated surgically. In 93% of patients an antibiotic treatment was documented. 50 patients (81%) were successfully healed. CONCLUSION It is important to identify and treat spondylodiscitis/spondylitis as early as possible. Diagnosis by means of blood culture and MRI, and treatment of the infection with antibiotics and possibly surgical intervention, seem to be very suitable, but need to be individualised for each and every patient.
Abstract:
OBJECTIVES To determine life expectancy for older women with breast cancer. DESIGN Prospective longitudinal study with 10 years of follow-up data. SETTING Hospitals or collaborating tumor registries in four geographic regions (Los Angeles, California; Minnesota; North Carolina; Rhode Island). PARTICIPANTS Women aged 65 and older at the time of breast cancer diagnosis with Stage I to IIIA disease and baseline measures of self-rated health (SRH) and walking ability (N = 615; 17% aged ≥80, 52% Stage I, 58% with ≥2 comorbidities). MEASUREMENTS Baseline SRH, baseline self-reported walking ability, and all-cause and breast cancer-specific estimated probability of 5- and 10-year survival. RESULTS At the time of breast cancer diagnosis, 39% of women reported poor SRH, and 28% reported limited ability to walk several blocks. The all-cause survival curves appear to separate after approximately 3 years, and the difference in survival probability between women with low SRH and limited walking ability and those with high SRH and no walking limitation was significant (0.708 vs 0.855 at 5 years, P ≤ .001; 0.300 vs 0.648 at 10 years, P < .001). There were no differences between the groups in breast cancer-specific survival at 5 and 10 years (P = .66 at 5 years, P = .16 at 10 years). CONCLUSION The combination of low SRH and limited ability to walk several blocks at diagnosis is an important predictor of worse all-cause survival at 5 and 10 years. These self-report measures, easily assessed in clinical practice, may offer an effective way to improve treatment decision-making in older adults with cancer.
Abstract:
Femoroacetabular impingement (FAI) is a dynamic conflict of the hip defined by a pathological, early abutment of the proximal femur onto the acetabulum or pelvis. In the past two decades, FAI has received increasing focus in both research and clinical practice as a cause of hip pain and prearthrotic deformity. Anatomical abnormalities such as an aspherical femoral head (cam-type FAI), a focal or general overgrowth of the acetabulum (pincer-type FAI), a high riding greater or lesser trochanter (extra-articular FAI), or abnormal torsion of the femur have been identified as underlying pathomorphologies. Open and arthroscopic treatment options are available to correct the deformity and to allow impingement-free range of motion. In routine practice, diagnosis and treatment planning of FAI is based on clinical examination and conventional imaging modalities such as standard radiography, magnetic resonance arthrography (MRA), and computed tomography (CT). Modern software tools allow three-dimensional analysis of the hip joint by extracting pelvic landmarks from two-dimensional antero-posterior pelvic radiographs. An object-oriented cross-platform program (Hip2Norm) has been developed and validated to standardize pelvic rotation and tilt on conventional AP pelvis radiographs. It has been shown that Hip2Norm is an accurate, consistent, reliable and reproducible tool for the correction of selected hip parameters on conventional radiographs. In contrast to conventional imaging modalities, which provide only static visualization, novel computer assisted tools have been developed to allow the dynamic analysis of FAI pathomechanics. In this context, a validated, CT-based software package (HipMotion) has been introduced. HipMotion is based on polygonal three-dimensional models of the patient’s pelvis and femur. The software includes simulation methods for range of motion, collision detection and accurate mapping of impingement areas. 
A preoperative treatment plan can be created by performing a virtual resection of any mapped impingement zones on both the femoral head-neck junction and the acetabular rim using the same three-dimensional models. The following book chapter provides a summarized description of current computer-assisted tools for the diagnosis and treatment planning of FAI, highlighting the possibility of both static and dynamic evaluation, their reliability and reproducibility, and their applicability to routine clinical use.
Abstract:
Intrahepatic cholangiocarcinomas are the second most common primary liver malignancies, with an increasing incidence over the past decades. Due to a lack of early symptoms and their aggressive oncobiological behavior, the diagnostic approach is challenging and the outcome remains unsatisfactory, with a poor prognosis. Thus, a consistent staging system is needed for comparison between different therapeutic approaches, but independent predictors of worse survival remain controversial. Currently, four different staging systems are primarily used, which differ in the way they determine the 'T' category. Furthermore, different nomograms and prognostic models have recently been proposed and may provide additional information for predicting prognosis, thereby helping to select an adequate treatment strategy. This review discusses the diagnostic approach to intrahepatic cholangiocarcinoma and compares and contrasts the most current staging systems and prognostic models.
Abstract:
The use of plasma exchange has been described in steroid-refractory central nervous system inflammatory demyelination in adults, but less has been published regarding its use in children and adolescents. We describe 12 children treated with plasma exchange for acute severe central nervous system inflammatory demyelination. The clinical attack leading to plasma exchange included symptomatic spinal cord lesions in 10 children and symptomatic brainstem lesions in 2. Diagnoses were acute transverse myelitis in 6, relapsing-remitting multiple sclerosis in 5, and acute disseminated encephalomyelitis in 1 child. Adverse events related to plasma exchange necessitating intervention were observed in 3 children. The median Expanded Disability Status Scale (EDSS) score at the start of plasma exchange was 7.5 (range 4-9.5). At 3 months, 7 children were ambulatory without aid (EDSS score ≤4). This retrospective study suggests that plasma exchange can be effective in ameliorating symptoms in severe pediatric central nervous system inflammatory demyelination, although the lack of randomization or a control group limits the ability to attribute this outcome entirely to plasma exchange.
Abstract:
Introduction. The HIV/AIDS disease burden disproportionately affects minority populations, specifically African Americans. While sexual risk behaviors play a role in the observed HIV burden, other factors including gender, age, socioeconomics, and barriers to healthcare access may also be contributory. The goal of this study was to determine how far into the HIV/AIDS disease process people of different ethnicities first present for healthcare. The study specifically analyzed the differences in CD4 cell counts at the initial HIV-1 diagnosis with respect to ethnicity. The study also analyzed racial differences in HIV/AIDS risk factors. Methods. This is a retrospective study using data from the Adult Spectrum of HIV Disease (ASD), collected by the City of Houston Department of Health. The ASD database contains information on newly reported HIV cases in the Harris County District Hospitals between 1989 and 2000. Each patient had an initial and a follow-up report. The variables of interest extracted from the ASD data set were CD4 counts at the initial HIV diagnosis, race, gender, age at HIV diagnosis, and behavioral risk factors. One-way ANOVA was used to examine differences in baseline CD4 counts at HIV diagnosis between racial/ethnic groups. Chi-square tests were used to analyze racial differences in risk factors. Results. The analyzed study sample comprised 4,767 patients. The study population was 47% Black, 37% White and 16% Hispanic [p<0.05]. The mean and median CD4 counts at diagnosis were 254 and 193 cells/µL, respectively. At the initial HIV diagnosis, Blacks had the highest average CD4 counts (285), followed by Whites (233) and Hispanics (212) [p<0.001]. These statistical differences, however, were only observed for CD4 counts above 350 [p<0.001], even when adjusted for age at diagnosis and gender [p<0.05].
Looking at risk factors, Blacks were mostly affected by intravenous drug use (IVDU) and heterosexual contact, whereas Whites and Hispanics were more affected by male homosexual contact [p<0.05]. Conclusion. (1) There were statistical differences in CD4 counts with respect to ethnicity, but these differences only existed for CD4 counts above 350 and do not appear to have clinical significance. Counterintuitively, Blacks had the highest CD4 counts, followed by Whites and Hispanics. (2) 50% of this study group clinically had AIDS at their initial HIV diagnosis (median CD4 = 193), irrespective of ethnicity. It was not clear from the data analysis whether these observations were due to failure of early HIV surveillance, HIV testing policies, or healthcare access; more studies are needed to address this question. (3) Homosexuality and bisexuality were the biggest risk factors for Whites and Hispanics, whereas Blacks were mostly affected by heterosexuality and IVDU, implying a need for different public health intervention strategies for these racial groups.
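The one-way ANOVA used above to compare baseline CD4 counts across the three racial/ethnic groups reduces to a ratio of between-group to within-group variance. A self-contained sketch on synthetic data (the CD4 values below are hypothetical, chosen only to mimic the reported group means, and the function name is mine):

```python
def one_way_anova_F(groups):
    """F statistic for a one-way ANOVA over a list of samples."""
    all_vals = [x for g in groups for x in g]
    n, k = len(all_vals), len(groups)
    grand_mean = sum(all_vals) / n
    # Between-group sum of squares (each group weighted by its size)
    ssb = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares
    ssw = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    # Mean squares: SSB/(k-1) over SSW/(n-k)
    return (ssb / (k - 1)) / (ssw / (n - k))

# Hypothetical CD4 samples for three groups (NOT the study data):
black = [300, 280, 275]     # group mean 285
white = [240, 230, 229]     # group mean 233
hispanic = [210, 215, 211]  # group mean 212
F = one_way_anova_F([black, white, hispanic])
print(F > 1.0)  # True: a large F suggests between-group differences
```

In practice the F statistic is compared against an F distribution with (k-1, n-k) degrees of freedom to obtain the p-values reported in the abstract.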
Abstract:
The purpose of this dissertation was to estimate HIV incidence among individuals who had HIV tests performed at the Houston Department of Health and Human Services (HDHHS) public health laboratory, and to examine the prevalence of concurrent HIV and AIDS diagnoses among HIV cases reported between 2000 and 2007 in Houston/Harris County. The first study estimated the cumulative HIV incidence among individuals testing at the Houston public health laboratory using Serologic Testing Algorithms for Recent HIV Seroconversion (STARHS) during the two-year study period (June 1, 2005 to May 31, 2007). HIV incidence was estimated using two independently developed statistical imputation methods, one developed by the Centers for Disease Control and Prevention (CDC) and the other by HDHHS. Among the 54,394 persons tested for HIV during the study period, 942 tested HIV positive (positivity rate = 1.7%). Of these, 448 (48%) were newly reported to the Houston HIV/AIDS Reporting System (HARS), and 417 of these 448 blood specimens (93%) were available for STARHS testing. The STARHS results showed that 139 (33%) of the 417 specimens were newly infected with HIV. Using both the CDC and HDHHS methods, the estimated cumulative HIV incidences over the two-year study period were similar: 862 per 100,000 persons (95% CI: 655-1,070) by the CDC method, and 925 per 100,000 persons (95% CI: 908-943) by the HDHHS method. Consistent with national findings, this study found that African Americans and men who have sex with men (MSM) accounted for most of the new HIV infections among individuals testing at the Houston public health laboratory.
Using the CDC statistical method, this study also found that the highest cumulative HIV incidence (2,176 per 100,000 persons [95% CI: 1,536-2,798]) was among those who tested at HIV counseling and testing sites, compared to sexually transmitted disease clinics (1,242 per 100,000 persons [95% CI: 871-1,608]) and city health clinics (215 per 100,000 persons [95% CI: 80-353]). This finding suggests that the HIV counseling and testing sites in Houston were successful in reaching high-risk populations and testing them early for HIV. In addition, older age groups had higher cumulative HIV incidence, but accounted for smaller proportions of new HIV infections. The incidence in the 30-39 age group (994 per 100,000 persons [95% CI: 625-1,363]) was 1.5 times the incidence in the 13-29 age group (645 per 100,000 persons [95% CI: 447-840]); the incidences in the 40-49 age group (1,371 per 100,000 persons [95% CI: 765-1,977]) and the 50-or-above age group (1,369 per 100,000 persons [95% CI: 318-2,415]) were 2.1 times the incidence in the youngest 13-29 age group. The increased HIV incidence in older age groups suggests that persons aged 40 or above are still at risk of contracting HIV, and HIV prevention programs should encourage more people in this age group to test for HIV. The second study investigated concurrent diagnoses of HIV and AIDS in Houston, with concurrent HIV/AIDS diagnosis defined as an AIDS diagnosis within three months of HIV diagnosis. About one-third of HIV cases in Houston/Harris County were diagnosed with HIV and AIDS concurrently (within three months). Using multivariable logistic regression analysis, this study found that being male, Hispanic, older, and diagnosed in the private sector of care were positively associated with concurrent HIV and AIDS diagnoses. By contrast, men who had sex with men and also used injection drugs (MSM/IDU) were less likely (odds ratio 0.64; 95% CI: 0.44-0.93) to have concurrent HIV and AIDS diagnoses.
A sensitivity analysis comparing different durations of elapsed time in the definition of concurrent HIV and AIDS diagnosis (1-month, 3-month, and 12-month cut-offs) affected the effect size of the odds ratios, but not their direction. The results of these two studies, one describing characteristics of individuals newly infected with HIV and the other describing persons diagnosed with HIV and AIDS concurrently, can be used as a reference for HIV prevention program planning in Houston/Harris County.
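The adjusted odds ratios above come from multivariable logistic regression, but the crude odds ratio underlying such an estimate is just the cross-product of a 2x2 table, and a fitted log-odds coefficient is converted back to an odds ratio by exponentiation. A minimal sketch with hypothetical counts (not the study's data):

```python
import math

def odds_ratio(exposed_cases, exposed_noncases,
               unexposed_cases, unexposed_noncases):
    """Crude odds ratio from a 2x2 table (cross-product ratio)."""
    return (exposed_cases * unexposed_noncases) / \
           (exposed_noncases * unexposed_cases)

# Hypothetical counts: MSM/IDU vs others, concurrent HIV/AIDS
# diagnosis vs HIV-only diagnosis.
or_crude = odds_ratio(32, 68, 300, 400)
print(round(or_crude, 2))  # 0.63: below 1, i.e. less likely concurrent

# A logistic-regression coefficient is a log-odds; exponentiating
# recovers the odds ratio reported in the text:
beta = math.log(0.64)
print(round(math.exp(beta), 2))  # 0.64
```

The multivariable model reported in the study additionally adjusts each coefficient for the other covariates (sex, ethnicity, age, sector of care), which a crude 2x2 table cannot do.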
Abstract:
Purpose. To determine the risk of late breast cancer recurrence (5 or more years after treatment) in a population of women diagnosed with early-stage breast cancer at The University of Texas M.D. Anderson Cancer Center (MDACC) between 1985 and 2000, and to examine the effect of this population's BMI, smoking history, reproductive history, hormone use, and alcohol intake at the time of diagnosis on the risk of late recurrence. Methods. Patients included 1,913 members of the Early Stage Breast Cancer Repository recruited at MDACC who had survived without a recurrence for at least five years after their initial diagnosis of early-stage breast cancer. Clinical and epidemiological information was ascertained twice during the study: first by medical record abstraction, then by patient interview at least five years after receipt of adjuvant treatment. A total of 223 late breast cancer recurrences were captured, with an average follow-up of 10.6 years. Cox proportional hazards models were used to calculate hazard ratios (HR) and 95% confidence intervals (CI).