788 results for Low-risk gestational trophoblastic neoplasia
Abstract:
Objective Risk scores and accelerated diagnostic protocols can identify chest pain patients at low risk of a major adverse cardiac event who could be discharged early from the ED, saving time and costs. We aimed to derive and validate a chest pain score and accelerated diagnostic protocol (ADP) that could safely increase the proportion of patients suitable for early discharge. Methods Logistic regression identified statistical predictors of major adverse cardiac events in a derivation cohort. Statistical coefficients were converted to whole numbers to create a score. Clinician feedback was used to improve the clinical plausibility and usability of the final score (Emergency Department Assessment of Chest pain Score [EDACS]). EDACS was combined with electrocardiogram results and troponin results at 0 and 2 h to develop an ADP (EDACS-ADP). The score and EDACS-ADP were validated and tested for reproducibility in separate cohorts of patients. Results In the derivation (n = 1974) and validation (n = 608) cohorts, the EDACS-ADP classified 42.2% (sensitivity 99.0%, specificity 49.9%) and 51.3% (sensitivity 100.0%, specificity 59.0%) of patients, respectively, as at low risk of major adverse cardiac events. The intra-class correlation coefficient for categorisation of patients as low risk was 0.87. Conclusion The EDACS-ADP identified approximately half of the patients presenting to the ED with possible cardiac chest pain as having low risk of short-term major adverse cardiac events, with high sensitivity. This is a significant improvement on similar, previously reported protocols. The EDACS-ADP is reproducible and has the potential to deliver considerable cost savings to health systems.
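As a concrete illustration of the derivation step described above (regression coefficients converted to whole-number score points, then combined with ECG and serial troponin results), here is a minimal Python sketch; the predictors, scaling factor and cut-off are invented placeholders, not the published EDACS items or threshold.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1974, 3))             # placeholder predictors (e.g. age, sex, symptoms)
    y = (rng.random(1974) < 0.15).astype(int)  # 1 = major adverse cardiac event

    model = LogisticRegression().fit(X, y)

    # Convert each regression coefficient into a whole-number score weight.
    scale = 2.0                                # arbitrary scaling chosen before rounding
    weights = np.round(model.coef_[0] * scale).astype(int)
    scores = X @ weights                       # additive risk score per patient

    # ADP-style rule: low risk requires a score below a cut-off AND a negative
    # ECG AND negative troponin at 0 and 2 h (all-negative placeholders here).
    cutoff = 16                                # placeholder, not the published threshold
    ecg_negative = np.ones(1974, dtype=bool)
    troponin_negative = np.ones(1974, dtype=bool)
    low_risk = (scores < cutoff) & ecg_negative & troponin_negative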
Abstract:
Wildlife harvesting has a long history in Australia, including obvious examples of overexploitation. Not surprisingly, there is scepticism that commercial harvesting can be undertaken sustainably. Kangaroo harvesting has been challenged regularly at Administrative Appeals Tribunals and elsewhere over the past three decades. Initially, the concern from conservation groups was the sustainability of the harvest. This has been addressed through regular, direct monitoring that now spans > 30 years and a conservative harvest regime with a low risk of overharvest in the face of uncertainty. Opposition to the harvest now continues from animal rights groups whose concerns have shifted from overall harvest sustainability to side effects such as animal welfare and changes to community structure, genetic composition and population age structure. Many of these concerns are speculative and difficult to address, requiring expensive data collection. One concern is that older females are the more successful breeders and teach their daughters optimal habitat and diet selection. The lack of older animals in a harvested population may reduce the fitness of the remaining individuals, implying that population viability would also be compromised. This argument can be countered by the persistence of populations under harvesting without any obvious impairment to reproduction. Nevertheless, an interesting question is how age influences reproductive output. In this study, data collected from a number of red kangaroo populations across eastern Australia indicate that the breeding success of older females is up to 7-20% higher than that of younger females. This effect is smaller than that of body condition and the environment, which can increase breeding success by up to 30% and 60%, respectively. The average age of mature females in a population may be reduced from 9 to 6 years, resulting in a potential reduction in breeding success of 3-4%. This appears to be offset in harvested populations by the improved condition of females resulting from a reduction in kangaroo density. There is an important recommendation for management. The best insurance policy against overharvest and unwanted side effects is not research, which could be never-ending. Rather, it is a harvest strategy that includes safeguards against uncertainty, such as harvest reserves, conservative quotas and regular monitoring. Research is still important in fine-tuning that strategy and is most usefully incorporated as adaptive management, where it can address the key questions of how populations respond to harvesting.
Abstract:
Diseases caused by the Lancefield group A streptococcus, Streptococcus pyogenes, are amongst the most challenging to clinicians and public health specialists alike. Although severe infections caused by S. pyogenes are relatively uncommon, affecting around 3 per 100,000 of the population per annum in developed countries, the case fatality is high relative to many other infections. Despite a long scientific tradition of studying their occurrence and characteristics, many aspects of their epidemiology remain poorly understood, and potential control measures undefined. Epidemiological studies can play an important role in identifying host, pathogen and environmental factors associated with risk of disease, manifestation of particular syndromes or poor survival. This can be of value in targeting prevention activities, as well as directing further basic research, potentially paving the way for the identification of novel therapeutic targets. The formation of a European network, Strep-EURO, provided an opportunity to explore epidemiological patterns across Europe. Funded by the Fifth Framework Programme of the European Commission's Directorate-General for Research (QLK2.CT.2002.01398), the Strep-EURO network was launched in September 2002. Twelve participants across eleven countries took part, led by the University of Lund in Sweden. Cases were defined as patients with S. pyogenes isolated from a normally sterile site, or from a non-sterile site in combination with clinical signs of streptococcal toxic shock syndrome (STSS). All participating countries undertook prospective enhanced surveillance between 1st January 2003 and 31st December 2004 to identify cases diagnosed during this period. A standardised surveillance dataset was defined, comprising demographic, clinical and risk factor information collected through a questionnaire. Isolates were collected by the national reference laboratories and characterised according to their M protein using conventional serological and emm gene typing. Descriptive statistics and multivariable analyses were undertaken to compare characteristics of cases between countries and identify factors associated with increased risk of death or development of STSS. Crude and age-adjusted rates of infection were calculated for each country where a catchment population could be defined. The project succeeded in establishing the first European surveillance network for severe S. pyogenes infections, with 5522 cases identified over the two years. Analysis of data gathered in the eleven countries yielded important new information on the epidemiology of severe S. pyogenes infections in Europe during the 2000s. Comprehensive epidemiological data on these infections were obtained for the first time from France, Greece and Romania. Incidence estimates identified a general north-south gradient, from high to low. Remarkably similar age-standardised rates were observed among the three Nordic participants, between 2.2 and 2.3 per 100,000 population. Rates in the UK were higher still, at 2.9/100,000, elevated by an upsurge of infections among injecting drug users. Rates from these northern countries were reasonably close to those observed in the USA and Australia during this period. In contrast, reporting rates in the more central and southern countries (Czech Republic, Romania, Cyprus and Italy) were substantially lower, 0.3 to 1.5 per 100,000 population, a likely reflection of poorer uptake of microbiological diagnostic methods within these countries.
Analysis of project data brought some new insights into risk factors for severe S. pyogenes infection, especially the importance of injecting drug use in the UK, with infections in this group fundamentally reshaping the epidemiology of these infections during this period. Several novel findings arose through this work, including the high degree of congruence in seasonal patterns between countries and the seasonal changes in case fatality rates. Elderly patients, those with compromised immune systems, those who developed STSS and those infected with an emm/M78, emm/M5, emm/M3 or emm/M1 strain were found to be most likely to die as a result of their infection, whereas diagnoses of cellulitis, septic arthritis, puerperal sepsis or non-focal infection were associated with a low risk of death, as were infections occurring during October. Analysis of augmented data from the UK found use of NSAIDs to be significantly associated with development of STSS, adding further fuel to the debate surrounding the role of NSAIDs in the development of severe disease. As a largely community-acquired infection, occurring sporadically and diffusely throughout the population, opportunities for control of severe infections caused by S. pyogenes remain limited, primarily involving contact chemoprophylaxis where clusters arise. Analysis of UK Strep-EURO data was used to quantify the risk to household contacts of cases, forming the basis of national guidance on the management of infection. Vaccines currently under development could offer a more effective control programme in future. Surveillance of invasive infections caused by S. pyogenes is of considerable public health importance as a means of identifying long- and short-term trends in incidence, allowing the need for, or impact of, public health measures to be evaluated. As a dynamic pathogen co-existing with a dynamic population, new opportunities for exploitation of its human host are likely to arise periodically, and as such continued monitoring remains essential.
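The crude and age-adjusted rates mentioned above reduce to a short calculation; the sketch below shows direct age standardisation with invented counts, populations and reference weights (not Strep-EURO data).

    # Crude rate: total cases over total population, per 100,000.
    # Age-standardised rate: age-specific rates weighted by a reference distribution.
    cases      = {"0-14": 40, "15-64": 150, "65+": 120}
    population = {"0-14": 1_000_000, "15-64": 3_500_000, "65+": 700_000}
    std_weight = {"0-14": 0.18, "15-64": 0.65, "65+": 0.17}  # reference age weights

    crude_rate = 100_000 * sum(cases.values()) / sum(population.values())
    age_specific = {a: 100_000 * cases[a] / population[a] for a in cases}
    standardised_rate = sum(std_weight[a] * age_specific[a] for a in cases)

    print(f"crude {crude_rate:.1f}, age-standardised {standardised_rate:.1f} per 100,000")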
Abstract:
Hereditary leiomyomatosis and renal cell cancer (HLRCC) is a rare, dominantly inherited tumor predisposition syndrome characterized by benign cutaneous leiomyomas and uterine leiomyomas (ULMs), and sometimes renal cell cancer (RCC). A few cases of uterine leiomyosarcoma (ULMS) have also been reported. Mutations in the nuclear gene encoding fumarate hydratase (FH), an enzyme of the mitochondrial tricarboxylic acid (TCA) cycle, underlie HLRCC. When inherited recessively, germline mutations in FH predispose to a neurological disorder, FH deficiency (FHD). Hereditary paragangliomatosis (HPGL) is a dominant disorder associated with paragangliomas and pheochromocytomas. Inherited mutations in three genes encoding subunits of succinate dehydrogenase (SDH), also a TCA cycle enzyme, predispose to HPGL. Both FH and SDH seem to act as tumor suppressors. One of the consequences of the TCA cycle defect is abnormal activation of the HIF1 pathway ('pseudohypoxia') in HLRCC and HPGL tumors. HIF1 drives transcription of genes encoding, for example, angiogenic factors that can facilitate tumor growth. Recently, hypoxia/HIF1 has also been suggested to be one of the causes of genetic instability. One of the aims of this study was to broaden the clinical definition of HLRCC. To determine the cancer risk and to identify possible novel tumor types associated with FH mutations, eight Finnish HLRCC/FHD families were extensively evaluated. The extension of the pedigrees and a tumor search based on the Finnish Cancer Registry yielded genealogical and cancer data on altogether 868 individuals. A standardized incidence ratio-based comparison of HLRCC/FHD family members with the general Finnish population revealed a 6.5-fold risk of RCC. Moreover, the risk of ULMS was highly increased. However, according to the recent and more stringent diagnostic criteria for ULMS, many of the HLRCC uterine tumors previously considered malignant are at present diagnosed as atypical or proliferative ULMs (with a low risk of recurrence). Thus, the formation of ULMS (as presently defined) in HLRCC appears to be uncommon. Although increased incidence was not observed, the genetic analyses interestingly suggested a possible association of breast and bladder cancer with loss of FH. Moreover, exceptionally, cancer cases were detected in an FHD family. Another clinical finding was conventional (clear cell) RCC in a young Spanish HLRCC patient. Conventional RCC is distinct from the types previously observed in this syndrome, but according to these results, FH mutations may underlie some conventional RCC cases in young patients. Secondly, we aimed to clarify the molecular pathway from a defective TCA cycle to tumor formation. Since HLRCC and HPGL tumors display abnormally activated HIF1, the hypothesized link between HIF1/hypoxia and genetic instability was of particular interest to study in HLRCC and HPGL tumor material. HIF1α (a subunit of HIF1) stabilization was confirmed in the majority of the specimens. However, no repression of MSH2, a protein of the DNA mismatch repair system, or microsatellite instability (MSI), an indicator of genetic instability, was observed. Accordingly, increased genetic instability does not seem to play a role in the tumorigenesis of pseudohypoxic TCA cycle-deficient tumors. Additionally, to study the putative alternative functions of FH, a recently identified alternative FH transcript (FHv) was characterized. FHv was found to contain an alternative exon 1b in place of exon 1.
Its differential subcellular distribution, lack of FH enzyme activity, low mRNA expression compared with FH, and induction by cellular stress suggest that FHv has a role distinct from that of FH, for example in apoptosis or cell survival. However, the physiological significance of FHv requires further elucidation.
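The standardized incidence ratio comparison used in this study reduces to an observed-over-expected calculation; a sketch with invented person-years and rates (not the registry data) follows.

    # SIR = observed cancers in the cohort / cancers expected from population rates.
    observed_rcc = 18

    person_years     = {"30-49": 8_000, "50-69": 6_000, "70+": 2_000}
    rate_per_100k_py = {"30-49": 5.0, "50-69": 25.0, "70+": 45.0}

    expected_rcc = sum(person_years[a] * rate_per_100k_py[a] / 100_000
                       for a in person_years)
    sir = observed_rcc / expected_rcc   # roughly 6.4 with these invented numbers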
Abstract:
Objectives: We sought to characterise the demographics, length of admission, final diagnoses, long-term outcomes and costs associated with the population who presented to an Australian emergency department (ED) with symptoms of possible acute coronary syndrome (ACS). Design, setting and participants: Prospectively collected data on ED patients presenting with suspected ACS between November 2008 and February 2011 were used, including data on presentation and at 30 days after presentation. Information on patient disposition, length of stay and costs incurred was extracted from hospital administration records. Main outcome measures: Primary outcomes were mean and median cost and length of hospital stay. Secondary outcomes were diagnosis of ACS, other cardiovascular conditions or non-cardiovascular conditions within 30 days of presentation. Results: An ACS was diagnosed in 103 (11.1%) of the 926 patients recruited. 193 patients (20.8%) were diagnosed with other cardiovascular-related conditions and 622 patients (67.2%) had non-cardiac-related chest pain. ACS events occurred in none of the low-risk group and in 11 patients (1.9%) of the intermediate-risk group. Ninety-two (28.0%) of the 329 high-risk patients had an ACS event. Patients with a proven ACS, high-grade atrioventricular block, pulmonary embolism or other respiratory conditions had the longest lengths of stay. The mean cost was highest in the ACS group ($13 509; 95% CI, $11 794–$15 223), followed by other cardiovascular conditions ($7283; 95% CI, $6152–$8415) and non-cardiovascular conditions ($3331; 95% CI, $2976–$3685). Conclusions: Most ED patients with symptoms of possible ACS do not have a cardiac cause for their presentation. The current guideline-based process of assessment is lengthy, costly and consumes significant resources. Investigation of strategies to shorten this process, or to reduce the need for objective cardiac testing in patients at intermediate risk according to the National Heart Foundation and Cardiac Society of Australia and New Zealand guideline, is required.
Abstract:
Trichinella surveillance in wildlife relies on muscle digestion of large samples, which are logistically difficult to store and transport in remote and tropical regions as well as labour-intensive to process. Serological methods such as enzyme-linked immunosorbent assays (ELISAs) offer rapid, cost-effective alternatives for surveillance but should be paired with additional tests because of the high false-positive rates encountered in wildlife. We investigated the utility of ELISAs coupled with Western blot (WB) in providing evidence of Trichinella exposure or infection in wild boar. Serum samples were collected from 673 wild boar from a high- and a low-risk region for Trichinella introduction within mainland Australia, which is considered Trichinella-free. Sera were examined using both an 'in-house' and a commercially available indirect ELISA, both based on excretory-secretory (E/S) antigens. Cut-off values for positive results were determined using sera from the low-risk population. All wild boar from the high-risk region (n = 352) and 139/321 (43.3%) of the wild boar from the low-risk region were tested by artificial digestion. Testing by Western blot using E/S antigens and by a Trichinella-specific real-time PCR was also carried out on all ELISA-positive samples. The two ELISAs correctly classified all positive controls as well as one naturally infected wild boar from Gabba Island in the Torres Strait. In both the high- and low-risk populations, the ELISA results showed substantial agreement (k-value = 0.66) that increased to very good (k-value = 0.82) when only WB-positive samples were compared. The results of testing sera collected from the Australian mainland showed that the Trichinella seroprevalence was 3.5% (95% C.I. 0.0-8.0) and 2.3% (95% C.I. 0.0-5.6) using the in-house and commercial ELISA coupled with WB, respectively. These estimates were significantly higher (P < 0.05) than the artificial digestion estimate of 0.0% (95% C.I. 0.0-1.1). Real-time PCR testing of muscle from seropositive animals did not detect Trichinella DNA in any mainland animals, but did reveal the presence of a second larvae-positive wild boar on Gabba Island, supporting the utility of real-time PCR as an alternative, highly sensitive method of muscle examination. The serology results suggest Australian wildlife may have been exposed to Trichinella parasites. However, because of the possibility of non-specific reactions with other parasitic infections, more work using well-defined cohorts of positive and negative samples is required. Even if the specificity of the ELISAs proves to be low, their ability to correctly classify the small number of true positive sera in this study indicates utility in screening wild boar populations for reactive sera, which can be followed up with additional testing.
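The agreement between the two ELISAs is reported as a k-value (Cohen's kappa); the calculation is compact enough to sketch with an invented 2x2 table of paired test calls on the same sera.

    # Cohen's kappa: observed agreement corrected for chance agreement.
    a_pos_b_pos, a_pos_b_neg = 20, 4
    a_neg_b_pos, a_neg_b_neg = 6, 643
    n = a_pos_b_pos + a_pos_b_neg + a_neg_b_pos + a_neg_b_neg

    p_observed = (a_pos_b_pos + a_neg_b_neg) / n
    p_a = (a_pos_b_pos + a_pos_b_neg) / n           # ELISA A positive fraction
    p_b = (a_pos_b_pos + a_neg_b_pos) / n           # ELISA B positive fraction
    p_chance = p_a * p_b + (1 - p_a) * (1 - p_b)    # expected chance agreement

    kappa = (p_observed - p_chance) / (1 - p_chance)   # ~0.79 with these counts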
Abstract:
Q fever is a vaccine-preventable disease; despite this, high annual notification numbers are still recorded in Australia. We have previously shown that seroprevalence in Queensland metropolitan regions is approaching that of rural areas. This study investigated the presence of nucleic acid from Coxiella burnetii, the agent responsible for Q fever, in a number of animal and environmental samples collected throughout Queensland, to identify potential sources of human infection. Samples were collected from 129 geographical locations and included urine, faeces and whole blood from 22 different animal species; 45 ticks removed from two species, canines and possums; 151 soil samples; 72 atmospheric dust samples collected from two locations; and 50 dust swabs collected from domestic vacuum cleaners. PCR testing was performed targeting the IS1111 and COM1 genes for the specific detection of C. burnetii DNA. There were 85 detections from 1318 animal samples, with detection rates for the various sample types ranging from 2.1% to 6.8%. Equine samples produced a detection rate of 11.9%, whilst feline and canine samples showed detection rates of 7.8% and 5.2%, respectively. Native animals had varying detection rates: pooled urines from flying foxes had 7.8%, whilst koalas had 5.1%, and 6.7% of ticks screened were positive. The soil and dust samples showed the presence of C. burnetii DNA at rates of 2.0% and 6.9%, respectively. These data show that specimens from a variety of animal species and the general environment provide a number of potential sources for C. burnetii infection of humans living in Queensland. These previously unrecognized sources may account for the high seroprevalence rates seen in putatively low-risk communities, including Q fever patients with no direct animal contact and subjects living in a low-risk urban environment.
Abstract:
We describe a novel approach to treatment planning for focal brachytherapy, utilizing a biologically based inverse optimization algorithm and biological imaging to target an ablative dose to known regions of significant tumour burden and a lower, therapeutic dose to low-risk regions.
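One way to read the inverse-optimization idea is as a constrained fit of source dwell times to region-dependent dose prescriptions. The sketch below uses non-negative least squares on an invented dose-influence matrix; it is a toy illustration under those assumptions, not the authors' algorithm, which incorporates biological modelling.

    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(1)
    n_voxels, n_dwells = 200, 20
    D = rng.random((n_voxels, n_dwells))      # dose to voxel i per unit dwell time j

    high_burden = np.zeros(n_voxels, dtype=bool)
    high_burden[:50] = True                   # voxels flagged by biological imaging
    prescription = np.where(high_burden, 20.0, 10.0)  # ablative vs therapeutic dose (Gy)

    # Non-negative least squares keeps dwell times physically meaningful (>= 0).
    dwell_times, residual = nnls(D, prescription)
    planned_dose = D @ dwell_times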
Abstract:
The National Energy Efficient Building Project (NEEBP) Phase One report, published in December 2014, investigated “process issues and systemic failures” in the administration of the energy performance requirements in the National Construction Code. It found that most stakeholders believed that under-compliance with these requirements is widespread across Australia, with similar issues being reported in all states and territories. The report found that many different factors were contributing to this outcome and, as a result, offered many recommendations that together would be expected to remedy the systemic issues reported. To follow up on the Phase 1 report, three additional projects were commissioned as part of Phase 2 of the overall NEEBP project. This report deals with the development and piloting of an Electronic Building Passport (EBP) tool – a project undertaken jointly by pitt&sherry and a team at the Queensland University of Technology (QUT) led by Dr Wendy Miller. The other Phase 2 projects cover audits of Class 1 buildings and issues relating to building alterations and additions. The passport concept aims to provide all stakeholders with (controlled) access to the key documentation and information that they need to verify the energy performance of buildings. This trial project deals with residential buildings, but in principle the concept could apply to any building type. Nine councils were recruited to help develop and test a pilot electronic building passport tool. The participation of these councils – across all states – enabled an assessment of the extent to which councils are currently utilising documentation to track the compliance of residential buildings with the energy performance requirements in the National Construction Code (NCC). Overall we found that none of the participating councils are currently compiling all of the energy performance-related documentation that would demonstrate code compliance. The key reasons for this include: a major lack of clarity on precisely what documentation should be collected; cost and budget pressures; low public/stakeholder demand for the documentation; and a pragmatic judgement that non-compliance with any regulated documentation requirements represents a relatively low risk for them. For example, some councils reported producing documentation, such as certificates of final completion, only on demand. Only three of the nine council participants reported regularly conducting compliance assessments or audits utilising this documentation and/or inspections. Overall we formed the view that the documentation and information tracking processes operating within the building standards and compliance system are not working to assure compliance with the Code’s energy performance requirements. In other words, the Code, and its implementation under state and territory regulatory processes, is falling short as a ‘quality assurance’ system for consumers. As a result it is likely that the new housing stock is under-performing relative to policy expectations, consuming unnecessary amounts of energy, imposing unnecessarily high energy bills on occupants, and generating unnecessary greenhouse gas emissions. At the same time, councils noted that demand for documentation relating to building energy performance was low. All the participant councils in the EBP pilot agreed that documentation and information processes need to work more effectively if the potential regulatory and market drivers towards energy efficient homes are to be harnessed.
These findings are fully consistent with the Phase 1 NEEBP report. It was also agreed that an EBP system could potentially play an important role in improving documentation and information processes. However, only one of the participant councils indicated that they might adopt such a system on a voluntary basis. The majority felt that such a system would only be taken up if it were:
- a nationally agreed system, imposed as a mandatory requirement under state or national regulation;
- capable of being used by multiple parties, including councils, private certifiers, building regulators, builders and energy assessors in particular; and
- fully integrated into their existing document management systems, or at least seamlessly compatible rather than a separate, unlinked tool.
Further, we note that the value of an EBP in capturing statistical information relating to the energy performance of buildings would be much greater if an EBP were adopted on a nationally consistent basis. Councils were clear that a key impediment to the take-up of an EBP system is that they are facing very considerable budget and staffing challenges. They report that they are often unable to meet all community demands from the resources available to them; therefore they are unlikely to provide resources to support the roll-out of an EBP system on a voluntary basis. Overall, we conclude from this pilot that the public good would be well served if the Australian, state and territory governments continued to develop and implement an Electronic Building Passport system in a cost-efficient and effective manner. This development should occur with detailed input from building regulators, the Australian Building Codes Board (ABCB), councils and private certifiers in the first instance. This report provides a suite of recommendations (Section 7.2) designed to advance the development and guide the implementation of a national EBP system.
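To make the passport concept concrete, a minimal sketch of what an EBP record might hold is given below; the field names and stakeholder roles are invented for illustration and are not drawn from the piloted tool.

    from dataclasses import dataclass, field

    @dataclass
    class EnergyDocument:
        doc_type: str       # e.g. "energy assessment", "certificate of final completion"
        issued_by: str      # assessor, certifier or council
        file_ref: str       # pointer into an existing document management system

    @dataclass
    class BuildingPassport:
        address: str
        ncc_class: str      # NCC building class, e.g. "Class 1"
        documents: list = field(default_factory=list)
        # Controlled access: stakeholder roles permitted to view this passport.
        readers: set = field(default_factory=lambda: {"council", "certifier", "owner"})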
Abstract:
- Objective To compare health service cost and length of stay between a traditional and an accelerated diagnostic approach to the assessment of acute coronary syndromes (ACS) among patients who presented to the emergency department (ED) of a large tertiary hospital in Australia.
- Design, setting and participants This historically controlled study analysed data collected from two independent patient cohorts presenting to the ED with potential ACS. The first cohort of 938 patients was recruited in 2008–2010, and these patients were assessed using the traditional diagnostic approach detailed in the national guideline. The second cohort of 921 patients was recruited in 2011–2013 and was assessed with the accelerated diagnostic approach named the Brisbane protocol. The Brisbane protocol applied early serial troponin testing at 0 and 2 h after presentation to the ED, compared with testing at 0 and 6 h in the traditional assessment process. The Brisbane protocol also defined a low-risk group of patients in whom no objective testing was performed. A decision tree model was used to compare the expected cost and length of stay in hospital between the two approaches. Probabilistic sensitivity analysis was used to account for model uncertainty.
- Results Compared with the traditional diagnostic approach, the Brisbane protocol was associated with a reduced expected cost of $1229 (95% CI −$1266 to $5122) and a reduced expected length of stay of 26 h (95% CI −14 to 136 h). The Brisbane protocol allowed physicians to discharge a higher proportion of low-risk and intermediate-risk patients from the ED within 4 h (72% vs 51%). Results from sensitivity analysis suggested the Brisbane protocol had a high chance of being cost-saving and time-saving.
- Conclusions This study provides some evidence of cost savings from a decision to adopt the Brisbane protocol. Benefits would arise for the hospital and for patients and their families.
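The decision-tree comparison with probabilistic sensitivity analysis can be sketched as a two-arm expected-cost model with uncertain parameters; every probability and cost below is an invented placeholder, not a model input from the study.

    import numpy as np

    rng = np.random.default_rng(2)
    n_sim = 10_000

    # Probability of early discharge per arm, with Beta-distributed uncertainty.
    p_early_trad = rng.beta(51, 49, n_sim)    # centred near 51%
    p_early_bris = rng.beta(72, 28, n_sim)    # centred near 72%

    # Pathway costs with Gamma-distributed uncertainty.
    cost_early = rng.gamma(shape=4, scale=500, size=n_sim)    # early discharge pathway
    cost_full  = rng.gamma(shape=4, scale=2000, size=n_sim)   # full work-up pathway

    expected_trad = p_early_trad * cost_early + (1 - p_early_trad) * cost_full
    expected_bris = p_early_bris * cost_early + (1 - p_early_bris) * cost_full
    saving = expected_trad - expected_bris

    print(f"mean saving ${saving.mean():.0f}, P(cost-saving) = {(saving > 0).mean():.2f}")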
Abstract:
Extraintestinal pathogenic Escherichia coli (ExPEC) represent a diverse group of E. coli strains, which infect extraintestinal sites such as the urinary tract, the bloodstream, the meninges, the peritoneal cavity, and the lungs. Urinary tract infections (UTIs) caused by uropathogenic E. coli (UPEC), the major subgroup of ExPEC, are among the most prevalent microbial diseases worldwide and a substantial burden for public health care systems. UTIs are responsible for serious morbidity and mortality in the elderly, in young children, and in immune-compromised and hospitalized patients. ExPEC strains are different, from both genetic and clinical perspectives, from commensal E. coli strains belonging to the normal intestinal flora and from intestinal pathogenic E. coli strains causing diarrhea. ExPEC strains are characterized by a broad range of alternative virulence factors, such as adhesins, toxins, and iron accumulation systems. Unlike diarrheagenic E. coli, whose distinctive virulence determinants evoke characteristic diarrheagenic symptoms and signs, ExPEC strains are exceedingly heterogeneous and are not known to possess any specific virulence factor, or set of factors, that is obligatory for the infection of a particular extraintestinal site (e.g. the urinary tract). The ExPEC genomes are highly diverse mosaic structures in permanent flux. These strains have obtained a significant amount of DNA (predicted to be up to 25% of the genome) through acquisition of foreign DNA from diverse related or non-related donor species by lateral transfer of mobile genetic elements, including pathogenicity islands (PAIs), plasmids, phages, transposons, and insertion elements. The ability of ExPEC strains to cause disease is mainly derived from this horizontally acquired gene pool; the exogenous DNA facilitates rapid adaptation of the pathogen to changing conditions and hence extends the spectrum of sites that can be infected. However, neither the amount of unique DNA in different ExPEC strains (or UPEC strains) nor the mechanisms lying behind the observed genomic mobility are known. Due to this extreme heterogeneity of the UPEC and ExPEC populations in general, routine surveillance of ExPEC is exceedingly difficult. In this project, we presented a novel virulence gene algorithm (VGA) for the estimation of the extraintestinal virulence potential (VP, pathogenicity risk) of clinically relevant ExPECs and fecal E. coli isolates. The VGA was based on a DNA microarray specific for the ExPEC phenotype (the ExPEC pathoarray). This array contained 77 DNA probes homologous with known (e.g. adhesion factors, iron accumulation systems, and toxins) and putative (e.g. genes predicted to be involved in adhesion, iron uptake, or metabolic functions) ExPEC virulence determinants. In total, 25 of the DNA probes homologous with known virulence factors and 36 of the probes representing putative extraintestinal virulence determinants were found at significantly higher frequencies in virulent ExPEC isolates than in commensal E. coli strains. We showed that the ExPEC pathoarray and the VGA could be readily used for the differentiation of highly virulent ExPECs both from less virulent ExPEC clones and from commensal E. coli strains. Applying the VGA to a group of unknown ExPECs (n = 53) and fecal E. coli isolates (n = 37), 83% of strains were correctly identified as extraintestinally virulent or commensal E. coli. Conversely, 15% of clinical ExPECs and 19% of fecal E.
coli strains failed to sort into their respective pathogenic and non-pathogenic groups. Clinical data and virulence gene profiles of these strains supported the estimated VPs; UPEC strains with atypically low risk ratios were largely isolated from patients with a particular medical history, including diabetes mellitus or catheterization, or from elderly patients. In addition, fecal E. coli strains with VPs characteristic of ExPEC were shown to represent a diagnostically important fraction of resident strains of the gut flora with a high potential of causing extraintestinal infections. Interestingly, a large fraction of the DNA probes associated with the ExPEC phenotype corresponded to novel DNA sequences without any known function in UTIs and thus represented new genetic markers for extraintestinal virulence. These DNA probes included unknown DNA sequences originating from genomic subtractions of four clinical ExPEC isolates as well as from five novel cosmid sequences identified in the UPEC strains HE300 and JS299. The characterized cosmid sequences (pJS332, pJS448, pJS666, pJS700, and pJS706) revealed complex modular DNA structures with known and unknown DNA fragments arranged in a puzzle-like manner and integrated into the common E. coli genomic backbone. Furthermore, cosmid pJS332 of the UPEC strain HE300, which carried a chromosomal virulence gene cluster (iroBCDEN) encoding the salmochelin siderophore system, was shown to be part of a transmissible plasmid of Salmonella enterica. Taken together, the results of this project pointed towards two conclusions: (i) homologous recombination, even within coding genes, contributes to the observed mosaicism of ExPEC genomes, and (ii) besides en bloc transfer of large DNA regions (e.g. chromosomal PAIs), rearrangements of small DNA modules also provide a means of genomic plasticity. The data presented in this project supplement previous whole-genome sequencing projects of E. coli and indicate that each E. coli genome displays a unique assemblage of individual mosaic structures, which enables these strains to successfully colonize and infect different anatomical sites.
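The virulence gene algorithm is described as reducing a strain's hybridisation profile over the probe set to a single virulence potential that is then thresholded; a toy sketch of that logic follows, with probe names, weights and cut-off all invented.

    # Each probe contributes a weight when the strain hybridises to it.
    probe_weights = {"adhesin_probe": 2.0, "iron_uptake_probe": 1.5,
                     "toxin_probe": 2.5, "putative_orf_probe": 1.0}

    def virulence_potential(profile):
        """profile maps probe name -> True/False hybridisation call."""
        return sum(w for probe, w in probe_weights.items() if profile.get(probe))

    strain = {"adhesin_probe": True, "toxin_probe": True, "putative_orf_probe": False}
    vp = virulence_potential(strain)
    is_expec_like = vp >= 4.0   # cut-off calibrated on known ExPEC vs commensal strains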
Abstract:
For optimal treatment planning, a thorough assessment of the metastatic status of mucosal squamous cell carcinoma of the head and neck (HNSCC) is required. Current imaging methods do not allow the recognition of all patients with metastatic disease. Therefore, elective treatment of the cervical lymph nodes is usually given to patients in whom the risk of subclinical metastasis is estimated to exceed 15-20%. The objective of this study was to improve the pre-treatment evaluation of patients diagnosed with HNSCC. In particular, we aimed at improving the identification of patients who will benefit from elective neck treatment. Computed tomography (CT) of the chest and abdomen was performed prospectively for 100 patients diagnosed with HNSCC. The findings were analysed to clarify the indications for this examination in this patient group. CT of the chest influenced the treatment approach in 3% of patients, while CT of the abdomen did not reveal any significant findings. Our results suggest that CT of the chest and abdomen is not indicated routinely for patients with newly diagnosed HNSCC but can be considered in selected cases. A retrospective analysis of 80 patients treated for early-stage squamous cell carcinoma of the oral tongue was performed to investigate the potential benefits of elective neck treatment and to examine whether histopathological features of the primary tumour could be used in the prediction of occult metastases, local recurrence, and/or poor survival. Patients who had received elective neck treatment had significantly fewer cervical recurrences during follow-up compared with those who had close observation of the cervical lymph nodes only. However, elective neck treatment did not result in a survival benefit. Of the histopathological parameters examined, depth of infiltration and pT-category (representing tumour diameter) predicted occult cervical metastasis, but only the pT-category predicted local recurrence. Depth of infiltration can be used in the identification of at-risk patients, but no clear cut-off value separating high-risk and low-risk patients was found. None of the histopathological parameters examined predicted survival. Sentinel lymph node (SLN) biopsy was studied as a means of diagnosing patients with subclinical cervical metastases. SLN biopsy was applied to 46 patients who underwent elective neck dissection for oral squamous cell carcinoma. In addition, SLN biopsy was applied to 13 patients with small oral cavity tumours who were not intended to undergo elective neck dissection because of a low risk of occult metastasis. The sensitivity of SLN biopsy for finding subclinical cervical metastases was 67% when SLN status was compared with the metastatic status of the rest of the neck dissection specimen. Of the patients not planned to have elective neck dissection, SLN biopsy revealed cervical metastasis in 15%. Our results suggest that SLN biopsy cannot yet entirely replace elective neck dissection in the treatment of oral cancer, but it seems beneficial for patients with a low risk of metastasis who are not intended for elective neck treatment according to current treatment protocols.
Abstract:
Aim: To characterize the inhibition of platelet function by paracetamol in vivo and in vitro, to evaluate the possible interaction of paracetamol with diclofenac or valdecoxib in vivo, and to assess the analgesic effect of the drugs in an experimental pain model. Methods: Healthy volunteers received increasing doses of intravenous paracetamol (15, 22.5 and 30 mg/kg), or the combination of paracetamol 1 g and diclofenac 1.1 mg/kg or valdecoxib 40 mg (as the pro-drug parecoxib). Inhibition of platelet function was assessed with photometric aggregometry, the platelet function analyzer (PFA-100), and release of thromboxane B2 (TxB2). Analgesia was assessed with the cold pressor test. The inhibition coefficient of platelet aggregation by paracetamol was determined, as was the nature of the interaction between paracetamol and diclofenac, by isobolographic analysis in vitro. Results: Paracetamol inhibited platelet aggregation and TxB2 release dose-dependently in volunteers and concentration-dependently in vitro. The inhibition coefficient was 15.2 mg/L (95% CI 11.8-18.6). Paracetamol augmented the platelet inhibition by diclofenac in vivo, and the isobole showed that this interaction is synergistic. Paracetamol showed no interaction with valdecoxib. The PFA-100 appeared insensitive in detecting platelet dysfunction caused by paracetamol, and the cold pressor test showed no analgesia. Conclusions: Paracetamol inhibits platelet function in vivo and shows synergism when combined with diclofenac. This effect may increase the risk of bleeding in surgical patients with an impaired haemostatic system. The combination of paracetamol and valdecoxib may be useful in patients at low risk of thromboembolism. The PFA-100 seems unsuitable for the detection of platelet dysfunction, and the cold pressor test for the detection of analgesia, by paracetamol.
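The inhibition coefficient reported above is the paracetamol concentration giving half-maximal inhibition; a standard way to estimate such a value is to fit a Hill curve to concentration-response data, sketched here with invented data points rather than the study's aggregometry measurements.

    import numpy as np
    from scipy.optimize import curve_fit

    conc = np.array([1, 3, 10, 15, 30, 60], dtype=float)         # mg/L
    inhibition = np.array([5, 15, 38, 50, 70, 85], dtype=float)  # % of control

    def hill(c, ic50, n):
        return 100.0 * c**n / (ic50**n + c**n)

    (ic50, n_hill), _ = curve_fit(hill, conc, inhibition, p0=[15.0, 1.0])
    # An isobolographic analysis repeats this for each drug alone and for
    # fixed-ratio combinations, then compares the combination coefficients
    # against the additivity line joining the single-drug values.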
Abstract:
Clozapine is the most effective drug in treating therapy-resistant schizophrenia and may even be superior to all other antipsychotics. However, its use is limited by a high incidence (approximately 0.8%) of a severe hematological side effect, agranulocytosis. The exact molecular mechanism(s) of clozapine-induced agranulocytosis is still unknown. We investigated the mechanisms behind responsiveness to clozapine therapy and the risk of developing agranulocytosis by performing an HLA (human leukocyte antigen) association study in patients with schizophrenia. The first group comprised patients defined by responsiveness to first-generation antipsychotics (FGAs) (n = 19). The second group was defined by a lack of response to FGAs but responsiveness to clozapine (n = 19). The third group of patients had a history of clozapine-induced granulocytopenia or agranulocytosis (n = 26). Finnish healthy blood donors served as controls (n = 120). We found a significantly increased frequency of HLA-A1 among patients who were refractory to FGAs but responsive to clozapine. We also found that the frequency of HLA-A1 was low in patients with clozapine-induced neutropenia or agranulocytosis. These results suggest that HLA-A1 may predict a good therapeutic outcome and a low risk of agranulocytosis, and therefore HLA typing may aid in the selection of patients for clozapine therapy. Furthermore, in a subgroup of schizophrenia, HLA-A1 may be in linkage disequilibrium with some vulnerability genes in the MHC (major histocompatibility complex) region on chromosome 6. These genes could be involved in antipsychotic drug response and clozapine-induced agranulocytosis. In addition, we investigated the effect of clozapine on gene expression in granulocytes by performing a microarray analysis on blood leukocytes of 8 schizophrenic patients who had started clozapine therapy for the first time. We identified altered expression of 4 genes implicated in the maturation or apoptosis of granulocytes: MPO (myeloperoxidase precursor), MNDA (myeloid cell nuclear differentiation antigen), FLT3LG (Fms-related tyrosine kinase 3 ligand) and ITGAL (antigen CD11A, lymphocyte function-associated antigen 1). The altered expression of these genes following clozapine administration may suggest their involvement in clozapine-induced agranulocytosis. Finally, we investigated whether normal human bone marrow mesenchymal stromal cells (MSCs) are sensitive to clozapine. We treated cultures of human MSCs and human skin fibroblasts with 10 µM unmodified clozapine and with clozapine bioactivated by oxidation. We found that, independent of bioactivation, clozapine was cytotoxic to MSCs in primary culture, whereas clozapine at the same concentration stimulated the growth of human fibroblasts. This suggests that direct cytotoxicity to MSCs is one possible mechanism by which clozapine induces agranulocytosis.
Abstract:
Stroke is the second leading cause of death and the leading cause of disability worldwide. Of all strokes, 80% to 85% are ischemic, and of these, less than 10% occur in young individuals. Stroke in young adults—most often defined as stroke occurring under the age of 45 or 50—can be particularly devastating due to the long expected life-span ahead and the marked socio-economic consequences. Current basic knowledge on ischemic stroke in this age group originates mostly from rather small and imprecise patient series. Regarding emergency treatment, systematic data on the use of intravenous thrombolysis are absent. For this Thesis project, we collected detailed clinical and radiological data on all consecutive patients aged 15 to 49 with first-ever ischemic stroke treated at the Helsinki University Central Hospital between 1994 and 2007. The aims of the study were to define the demographic characteristics, risk factors, imaging features, etiology, and long-term mortality and its predictors in this patient population. We additionally sought to investigate whether intravenous thrombolysis is safe and beneficial for the treatment of acute ischemic stroke in the young. Of our 1008 patients, most were males (ratio 1.7:1), who clearly outnumbered females after the age of 44, but females were preponderant among those aged <30. Occurrence increased exponentially with age. The most frequent risk factors were dyslipidemia (60%), smoking (44%), and hypertension (39%). Risk factors accumulated in males and with increasing age. Cardioembolism (20%) and cervicocerebral artery dissection (15%) were the most frequent etiologic subgroups, followed by small-vessel disease (14%) and large-artery atherosclerosis (8%). A total of 33% had undetermined etiology. Left-hemisphere strokes were more common in general. Posterior circulation infarcts were more common among those aged <45. Multiple brain infarcts were present in 23% of our patients, 13% had silent infarcts, and 5% had leukoaraiosis. Of those with silent brain infarcts, the majority (54%) had only a single lesion, and most of the silent infarcts were located in the basal ganglia (39%) and subcortical regions (21%). In a logistic regression analysis, type 1 diabetes mellitus in particular predicted the presence of both silent brain infarcts (odds ratio 5.78, 95% confidence interval 2.37-14.10) and leukoaraiosis (9.75; 3.39-28.04). We identified 48 young patients with hemispheric ischemic stroke treated with intravenous tissue plasminogen activator, alteplase. For comparison, we identified 96 untreated control patients matched by age, gender, and admission stroke severity, as well as 96 alteplase-treated older controls aged 50 to 79 matched by gender and stroke severity. Alteplase-treated young patients more often recovered completely (27% versus 10%, P=0.010) or had only mild residual symptoms (40% versus 22%, P=0.025) compared with age-matched controls. None of the alteplase-treated young patients had symptomatic intracerebral hemorrhage or died within the 3-month follow-up. Overall long-term mortality was low in our patient population. Cumulative mortality risks were 2.7% (95% confidence interval 1.5-3.9%) at 1 month, 4.7% (3.1-6.3%) at 1 year, and 10.7% (9.9-11.5%) at 5 years. Among the 30-day survivors who died during the 5-year follow-up, more than half died of vascular causes.
Malignancy, heart failure, heavy drinking, preceding infection, type 1 diabetes, increasing age, and large-artery atherosclerosis causing the index stroke independently predicted 5-year mortality when adjusted for age, gender, relevant risk factors, stroke severity, and etiologic subtype. In sum, young adults with ischemic stroke have distinct demographic patterns and they frequently harbor traditional vascular risk factors. Etiology in the young is extremely diverse, but in as many as one-third the exact cause remains unknown. Silent brain infarcts and leukoaraiosis are not uncommon brain imaging findings in these patients and should not be overlooked due to their potential prognostic relevance. Outcomes in young adults with hemispheric ischemic stroke can safely be improved with intravenous thrombolysis. Furthermore, despite their overall low risk of death after ischemic stroke, several easily recognizable factors—of which most are modifiable—predict higher mortality in the long term in young adults.
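The cumulative mortality risks quoted above are product-limit (Kaplan-Meier-type) estimates; the calculation behind them is sketched below on invented follow-up data.

    # Each tuple: (years to death or censoring, event flag: 1 = died, 0 = censored).
    follow_up = [(0.1, 1), (0.8, 1), (2.5, 1)] + [(5.0, 0)] * 27

    death_times = sorted({t for t, e in follow_up if e == 1})
    survival = 1.0
    for t in death_times:
        at_risk = sum(1 for u, _ in follow_up if u >= t)
        deaths = sum(1 for u, e in follow_up if u == t and e == 1)
        survival *= 1 - deaths / at_risk

    cumulative_mortality = 1 - survival   # 0.10 with these invented numbers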