32 results for Patient monitoring
in Helda - Digital Repository of the University of Helsinki
Abstract:
The methods for estimating patient exposure in x-ray imaging are based on the measurement of the radiation incident on the patient. In digital imaging, the useful dose range of the detector is large and excessive doses may remain undetected. Therefore, real-time monitoring of radiation exposure is important. According to international recommendations, the measurement uncertainty should be lower than 7% (at the 95% confidence level). The kerma-area product (KAP) is a measurement quantity used for monitoring patient exposure to radiation. A field KAP meter is typically attached to an x-ray device, and it is important to recognize the effect of this measurement geometry on the response of the meter. In the tandem calibration method introduced in this study, a field KAP meter is used in its clinical position and calibrated against a reference KAP meter. This method provides a practical way to calibrate field KAP meters; however, the reference KAP meters themselves require comprehensive calibration. In the calibration laboratory it is recommended to use standard radiation qualities, but these do not entirely cover the large range of clinical radiation qualities. In this work, the energy dependence of the response of different KAP meter types was examined. According to our findings, the recommended accuracy in KAP measurements is difficult to achieve with conventional KAP meters because of their strong energy dependence. The energy dependence of the response of a novel large KAP meter was found to be much lower than that of conventional KAP meters, and the accuracy of the tandem method can be improved by using this meter type as the reference meter. A KAP meter cannot be used to determine the radiation exposure of patients in mammography, in which part of the radiation beam is always aimed directly at the detector without attenuation by the tissue.
This work assessed whether pixel values from this detector area could be used to monitor the radiation beam incident on the patient. The results were congruent with the tube output calculation, which is the method generally used for this purpose, and the recommended accuracy can be achieved with the studied method. Radiation qualities and dose levels need to be re-optimized when new detector types are introduced; in this work, the optimal selections were examined with one direct digital detector type. For this device, the use of radiation qualities with higher energies was recommended, and appropriate image quality was achieved by increasing the low dose level of the system.
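The tandem method described above can be illustrated with a short sketch: the field KAP meter stays in its clinical position, a reference KAP meter is placed in the same beam, and the ratio of the two readings gives the field meter's calibration coefficient for that radiation quality. The function names and numerical values below are illustrative assumptions, not data from the thesis.

```python
# Sketch of the tandem calibration idea: the field KAP meter is
# calibrated in situ against a reference KAP meter placed in the
# same beam. All readings here are hypothetical.

def tandem_calibration_coefficient(kap_reference_gy_cm2, field_meter_reading):
    """Calibration coefficient for a field KAP meter, from one tandem
    measurement at a given radiation quality."""
    return kap_reference_gy_cm2 / field_meter_reading

def calibrated_kap(field_meter_reading, coefficient):
    """Corrected KAP value from a raw field meter reading."""
    return field_meter_reading * coefficient

# Hypothetical readings for one radiation quality:
k = tandem_calibration_coefficient(kap_reference_gy_cm2=1.05,
                                   field_meter_reading=1.00)
print(round(calibrated_kap(0.80, k), 3))  # 0.84
```

In practice a separate coefficient would be determined for each clinical radiation quality, since the abstract notes the strong energy dependence of conventional KAP meters.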
Abstract:
Matrix metalloproteinase (MMP)-8, collagenase-2, is a key mediator of irreversible tissue destruction in chronic periodontitis and is detectable in gingival crevicular fluid (GCF). MMP-8 mostly originates from neutrophil leukocytes, the first-line defence cells, which are abundant in GCF, especially in inflammation. MMP-8 is capable of degrading almost all extracellular matrix and basement membrane components and is especially efficient against type I collagen. Thus the expression of MMP-8 in GCF could be valuable in monitoring the activity of periodontitis and could possibly offer a diagnostic means to predict its progression. In this study, the value of MMP-8 detection from GCF in monitoring periodontal health and disease was evaluated, with special reference to its ability to differentiate periodontal health from different disease states of the periodontium and to recognise the progression of periodontitis, i.e. active sites. For chair-side detection of MMP-8 from GCF or peri-implant sulcus fluid (PISF) samples, a dipstick test based on immunochromatography involving two monoclonal antibodies was developed. The immunoassay for the detection of MMP-8 from GCF was found to be more suitable for monitoring periodontitis than detection of GCF elastase concentration or activity. Periodontally healthy subjects and individuals suffering from gingivitis or periodontitis could be differentiated by means of GCF MMP-8 levels and dipstick testing when the positive threshold value of the MMP-8 chair-side test was set at 1000 µg/l. MMP-8 dipstick test results from periodontally healthy subjects and from subjects with gingivitis were mainly negative, while periodontitis patients' sites with deep (≥ 5 mm) pockets that bled on probing were most often test positive. Periodontitis patients' GCF MMP-8 levels decreased with hygiene-phase periodontal treatment (scaling and root planing, SRP) and were further reduced during the three-month maintenance phase.
A decrease in GCF MMP-8 levels could be monitored with the MMP-8 test. Agreement between the test stick and the quantitative assay was very good (κ = 0.81), and the test provided a baseline sensitivity of 0.83 and specificity of 0.96. During the 12-month longitudinal maintenance phase, periodontitis patients' progressing sites (sites with an increase in attachment loss ≥ 2 mm during the maintenance phase) had elevated GCF MMP-8 levels compared with stable sites. Overall mean MMP-8 concentrations in smokers' (S) sites were lower than in non-smokers' (NS) sites, but in progressing S and NS sites the concentrations were at an equal level. Sites with exceptionally and repeatedly elevated MMP-8 concentrations during the maintenance phase were clustered in smoking patients with a poor response to SRP (refractory patients). These sites in particular were identified by the MMP-8 test. Subgingival plaque samples from periodontitis patients' deep periodontal pockets were examined by polymerase chain reaction (PCR) to find out whether periodontal lesions may serve as a niche for Chlamydia pneumoniae. Findings were compared with the clinical periodontal parameters and GCF MMP-8 levels to determine the correlation with periodontal status. Traces of C. pneumoniae were identified in one periodontitis patient's pooled subgingival plaque sample by means of PCR. After periodontal treatment (SRP) the sample was negative for C. pneumoniae. The clinical parameters and biomarkers (MMP-8) of the patient with the positive C. pneumoniae finding did not differ from those of the other study patients. It was concluded that MMP-8 concentrations in GCF of sites from periodontally healthy individuals, subjects with gingivitis and subjects with periodontitis are at different levels. The cut-off value of the developed MMP-8 test is at an optimal level to differentiate between these conditions and can possibly be utilised to identify individuals at risk of the transition from gingivitis to periodontitis.
In periodontitis patients, repeatedly elevated GCF MMP-8 concentrations may indicate sites at risk of progression of periodontitis, as well as patients with a poor response to conventional periodontal treatment (SRP). This can be monitored by MMP-8 testing. Despite the lower mean GCF MMP-8 concentrations in smokers, a fraction of smokers' sites expressed very high MMP-8 concentrations together with enhanced periodontal activity and could be identified with the MMP-8-specific chair-side test. Deep periodontal lesions may be niches for non-periodontopathogenic micro-organisms with systemic effects, such as C. pneumoniae, and may possibly play a role in transmission from one subject to another.
Abstract:
Thrombophilia (TF) predisposes to both venous and arterial thrombosis at a young age. TF may also contribute to the thrombosis or stenosis of the hemodialysis (HD) vascular access in patients with end-stage renal disease (ESRD). When involved in severe thrombosis, TF may be associated with an inadequate response to anticoagulation. Lepirudin, a potent direct thrombin inhibitor (DTI) indicated for heparin-induced thrombocytopenia-related thrombosis, could offer a treatment alternative in TF. Monitoring of narrow-ranged lepirudin also demands new laboratory approaches. These issues constitute the targets of this thesis. We evaluated the prevalence of TF in patients with ESRD and its impact upon the thrombosis- or stenosis-free survival of the vascular access. Altogether 237 ESRD patients were prospectively screened for TF and thrombogenic risk factors prior to HD access surgery in 2002-2004 (mean follow-up 3.6 years). TF was evident in 43 (18%) of the ESRD patients, more often in males (23 vs. 9%, p=0.009). The known gene mutations FV Leiden and FII G20210A occurred in 4%. The vascular access matured sufficiently in 226 (95%). The 1-year thrombosis- and stenosis-free access survival was 72%. Female gender (hazard ratio, HR, 2.5; 95% CI 1.6-3.9) and TF (HR 1.9, 95% CI 1.1-3.3) were independent risk factors for shortened thrombosis- and stenosis-free survival. Additionally, TF or a thrombogenic background was found in relatively young patients with severe thrombosis, either in the hepatic veins (Budd-Chiari syndrome, BCS; one patient) or in inoperable critical limb ischemia (CLI; six patients). Lepirudin was evaluated in an off-label setting in severe thrombosis after inefficacious traditional anticoagulation, when no treatment options remained other than major invasive procedures such as lower extremity amputation. Lepirudin treatments were repeatedly monitored clinically and with laboratory assessments (e.g. activated partial thromboplastin time, APTT).
In our preliminary studies, lepirudin appeared safe in these thrombotic emergencies, and no bleeds occurred. Lepirudin, an effective DTI, controlled the thrombosis, and all patients gradually recovered. Only one limb amputation was performed, 3 years later, during the follow-up (mean 4 years). Furthermore, we aimed to overcome the limitations of APTT and the confounding effects of warfarin (INR 1.5-3.9) and lupus anticoagulant (LA). Lepirudin responses were assessed in vitro by five specific laboratory methods. The ecarin chromogenic assay (ECA) and anti-Factor IIa (anti-FIIa) assay correlated precisely (r=0.99) with each other and with spiked lepirudin in all plasma pools: normal, warfarin-containing and LA-containing plasma. In contrast, in the presence of warfarin and LA, both APTT and prothrombinase-induced clotting time (PiCT®) were limited by non-linear and imprecise dose responses. As a global coagulation test, APTT is useful in parallel with the precise chromogenic methods ECA or anti-FIIa in challenging clinical situations. Lepirudin treatment requires a multidisciplinary approach to ensure appropriate patient selection, interpretation of laboratory monitoring, and treatment safety. TF seemed to be associated with complicated thrombotic events in the venous (BCS), arterial (CLI) and vascular access settings. TF screening should be aimed at patients with repeated access complications or prior unprovoked thromboembolic events. Lepirudin inhibits both free and clot-bound thrombin, which heparin fails to inhibit. Lepirudin seems to offer a potent and safe option for the treatment of severe thrombosis. Multi-centre randomized trials are needed to assess the management of complicated thrombotic events with DTIs such as lepirudin and to seek options for preventing access complications.
Abstract:
Modern drug discovery gives rise to a great number of potential new therapeutic agents, but in some cases efficient treatment of the patient may not be achieved because delivery of the active compound to the target site is insufficient. Thus, drug delivery is one of the major challenges in current pharmaceutical research. Numerous nanoparticle-based drug carriers, e.g. liposomes, have been developed for enhanced drug delivery and targeting. Drug targeting may enhance the efficiency of the treatment and, importantly, reduce unwanted side effects by decreasing drug distribution to non-target tissues. Liposomes are biocompatible lipid-based carriers that have been studied for drug delivery over the last 40 years. They can be functionalized with targeting ligands and sensing materials for triggered activation. In this study, various external signal-assisted liposomal delivery systems were developed. Signals can be used to modulate drug permeation or release from the liposome formulation, and they provide accurate control over the time, place and rate of activation. The study involved three types of signals used to trigger drug permeation and release: electricity, heat and light. An electrical stimulus was utilized to enhance the permeation of liposomal DNA across the skin. Liposome/DNA complex-mediated transfections were performed in a tight rat epidermal cell model. Various transfection media and current intensities were tested, and transfection efficiency was evaluated non-invasively by monitoring the concentration of secreted reporter protein in the cell culture medium. The liposome/DNA complexes produced gene expression, but the electrical stimulus did not enhance transfection efficiency significantly. A heat-sensitive liposomal drug delivery system was developed by coating liposomes with a biodegradable and thermosensitive poly(N-(2-hydroxypropyl)methacrylamide mono/dilactate) polymer.
Temperature-triggered liposome aggregation and contents release from the liposomes were evaluated. The cloud point temperature (CP) of the polymer was set to 42 °C. Polymer-coated liposome aggregation and contents release were observed above the CP of the polymer, while non-coated liposomes remained intact. The polymer precipitates above its CP and interacts with the liposomal bilayers; this most likely permeabilizes the liposomal membrane and induces contents release. Light-sensitivity was introduced to liposomes by incorporation of small (< 5 nm) gold nanoparticles. Hydrophobic and hydrophilic gold nanoparticles were embedded in thermosensitive liposomes, and contents release was investigated upon UV light exposure. UV light-induced lipid phase transitions were examined with small-angle X-ray scattering, and light-triggered contents release was also demonstrated in a human retinal pigment epithelial cell line. Gold nanoparticles absorb light energy and transform it into heat, which induces phase transitions in the liposomes and triggers contents release. In conclusion, external signal-activated liposomes offer an advanced platform for numerous applications in drug delivery, particularly localized drug delivery. Drug release may be localized to the target site with a triggering stimulus, resulting in a better therapeutic response and fewer adverse effects. The triggering signal and mechanism of activation can be selected according to the specific application.
Abstract:
White-rot and brown-rot fungi are known as the most efficient degraders of the lignocellulose of wood and litter in nature. White-rot fungi can degrade all components of wood: lignin, cellulose and hemicellulose. Selectively lignin-degrading fungi remove proportionally more of the recalcitrant lignin than cellulose or hemicellulose, leaving behind white, almost pure cellulose. It is precisely these selective white-rot fungi that are of interest for biotechnical applications: they can be used to pre-treat wood chips, for example in papermaking, to remove the detrimental lignin. Brown-rot fungi are notable decayers of wood, timber and wooden structures; examples used in this work are Gloeophyllum trabeum (saunasieni) and Poria (Postia) placenta (istukkakääpä). In addition to hemicellulose, brown-rot fungi degrade the cellulose of wood, leaving behind brown lignin that crumbles into powder; they modify the lignin only to some extent. Besides the two brown-rot fungi G. trabeum and P. placenta, white-rot fungi were studied, of which Ceriporiopsis subvermispora (karstakääpä) and the rare Physisporinus rivulosus (talikääpä) degrade lignin highly selectively. Phanerochaete chrysosporium is a widely studied fungus worldwide, and the white-rot fungus Phlebia radiata (rusorypykkä) has been studied extensively at the Department of Microbiology. In addition, the production of ligninolytic enzymes by Phlebia tremellosa (hytyrypykkä) strains and their degradation of 14C-labelled synthetic lignin (DHP) were studied; P. radiata and P. tremellosa have previously been shown to degrade lignin selectively. This work examined how fungal growth can be measured, how comparable the results obtained by different measurement methods are, and whether the most active growth phases of the fungi coincide when measured by different methods. The most important results were the following observations: (i) P. radiata and P. tremellosa strains produced lignin and manganese peroxidase enzymes (LiP and MnP) as well as laccase, and 2-3 LiP enzymes were purified from the fungi and one MnP enzyme from P. radiata; (ii) P. tremellosa strains degraded labelled synthetic lignin (DHP) as well as the extensively studied P. chrysosporium and P. radiata; (iii) wood, the natural growth substrate of the fungi, increased the demethoxylation of the [O14CH3]-labelled lignin model compound to 14CO2 by both white-rot and brown-rot fungi compared with media without wood; (iv) demethoxylation (14CO2 production) was in most cases better in a normal air atmosphere than in oxygen; (v) in oxygen, the best 14CO2 production was obtained, for both white-rot and brown-rot fungi, in wood-block cultures supplemented with nutrient nitrogen or with nitrogen and glucose; (vi) in air, 14CO2 production on wood was strongest for the white-rot fungi without added nutrients, whereas for G. trabeum it was equally good on the different media; (vii) biomass formation, measured as the ergosterol content of the mycelia, was greater for the brown-rot than for the white-rot fungi; and (viii) the peak biomass concentrations of the six fungi differed in magnitude, and the times of their maxima varied during the five-week cultivations. The white-rot fungus P. radiata, isolated and extensively studied at the Department of Microbiology in Viikki, was included in all the experiments. Its LiP activity and 14CO2 production from 14C-ring-labelled synthetic lignin (DHP) correlated very well. Biomass formation determined by ergosterol strongly supported the results obtained by enzyme activity measurements and isotope cultivations.
Abstract:
Contamination of urban streams is a rising topic worldwide, but the assessment and investigation of stormwater-induced contamination is limited by the large amount of water quality data needed to obtain reliable results. In this study, stream bed sediments were studied to determine their degree of contamination and their applicability to monitoring aquatic metal contamination in urban areas. The interpretation of sedimentary metal concentrations is, however, not straightforward, since the concentrations commonly show spatial and temporal variations in response to natural processes. The variations of, and controls on, metal concentrations were examined at different scales to increase understanding of the usefulness of sediment metal concentrations in detecting anthropogenic metal contamination patterns. The acid-extractable concentrations of Zn, Cu, Pb and Cd were determined from the surface sediments and water of small streams in the Helsinki metropolitan region, southern Finland. The data consist of two datasets: sediment samples from 53 sites located in the catchment of the Stream Gräsanoja, and sediment and water samples from 67 independent catchments scattered around the metropolitan region. Moreover, the sediment samples were analyzed for their physical and chemical composition (e.g. total organic carbon, clay-%, Al, Li, Fe, Mn) and the speciation of metals (in the Stream Gräsanoja dataset). The metal concentrations revealed that the stream sediments were moderately contaminated and posed no immediate threat to the biota. However, at some sites the sediments appeared to be polluted with Cu or Zn. The metal concentrations increased with increasing intensity of urbanization, but site-specific factors, such as point sources, were responsible for the highest metal concentrations. The sediment analyses thus revealed a need for more detailed studies on the processes and factors that cause the hot-spot metal concentrations.
The sediment composition and metal speciation analyses indicated that organic matter is a very strong indirect control on metal concentrations, and it should be accounted for when studying anthropogenic metal contamination patterns. The fine-scale spatial and temporal variations of metal concentrations were low enough to allow meaningful interpretation of substantial differences in metal concentrations between sites. Furthermore, the metal concentrations in the stream bed sediments correlated better with the urbanization of the catchment than did the total metal concentrations in the water phase. These results suggest that stream sediments show true potential for wider use in detecting spatial differences in the metal contamination of urban streams. Consequently, using the sediment approach, regional estimates of stormwater-related metal contamination could be obtained fairly cost-effectively, and the stability and reliability of the results would be higher than for analyses of single water samples. Nevertheless, water samples remain essential for analysing the dissolved concentrations of metals, momentary discharges from point sources in particular.
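Because organic matter was found to be a strong control on sediment metal concentrations, between-site comparisons in geochemistry are often made on concentrations normalized to such a controlling variable. The sketch below illustrates this common practice with hypothetical site values; it is not the procedure of the study itself.

```python
# Sketch of accounting for a natural control (here total organic
# carbon, TOC) when comparing sediment metal concentrations between
# sites: normalize the metal concentration by the controlling
# variable. All site values below are hypothetical.

def toc_normalized(metal_mg_per_kg, toc_percent):
    """Metal concentration per unit organic carbon (mg/kg per TOC-%)."""
    return metal_mg_per_kg / toc_percent

sites = {
    "urban_site": {"Zn": 420.0, "TOC": 6.0},
    "rural_site": {"Zn": 90.0, "TOC": 2.0},
}
for name, s in sites.items():
    # After normalization, the urban excess that is not explained by
    # organic matter content remains visible.
    print(name, round(toc_normalized(s["Zn"], s["TOC"]), 1))
```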
Abstract:
The Taita Hills in southeastern Kenya form the northernmost part of Africa’s Eastern Arc Mountains, which have been identified by Conservation International as one of the top ten biodiversity hotspots on Earth. As with many areas of the developing world, over recent decades the Taita Hills have experienced significant population growth leading to associated major changes in land use and land cover (LULC), as well as escalating land degradation, particularly soil erosion. Multi-temporal medium resolution multispectral optical satellite data, such as imagery from the SPOT HRV, HRVIR, and HRG sensors, provides a valuable source of information for environmental monitoring and modelling at a landscape level at local and regional scales. However, utilization of multi-temporal SPOT data in quantitative remote sensing studies requires the removal of atmospheric effects and the derivation of surface reflectance factor. Furthermore, for areas of rugged terrain, such as the Taita Hills, topographic correction is necessary to derive comparable reflectance throughout a SPOT scene. Reliable monitoring of LULC change over time and modelling of land degradation and human population distribution and abundance are of crucial importance to sustainable development, natural resource management, biodiversity conservation, and understanding and mitigating climate change and its impacts. The main purpose of this thesis was to develop and validate enhanced processing of SPOT satellite imagery for use in environmental monitoring and modelling at a landscape level, in regions of the developing world with limited ancillary data availability. 
The Taita Hills formed the application study site, whilst the Helsinki metropolitan region was used as a control site for validation and assessment of the applied atmospheric correction techniques; there, multiangular reflectance field measurements were taken and horizontal visibility meteorological data concurrent with image acquisition were available. The proposed historical empirical line method (HELM) for absolute atmospheric correction was found to be the only applied technique that could derive surface reflectance factor within an RMSE of < 0.02 ρs in the SPOT visible and near-infrared bands; an accuracy level identified as a benchmark for successful atmospheric correction. A multi-scale segmentation/object relationship modelling (MSS/ORM) approach was applied to map LULC in the Taita Hills from the multi-temporal SPOT imagery. This object-based procedure was shown to yield significant improvements over a uni-scale maximum-likelihood technique. The derived LULC data were used in combination with low-cost GIS geospatial layers describing elevation, rainfall and soil type to model degradation in the Taita Hills in the form of potential soil loss, utilizing the simple universal soil loss equation (USLE). Furthermore, human population distribution and abundance were modelled with satisfactory results using only SPOT- and GIS-derived data and non-Gaussian predictive modelling techniques. The SPOT-derived LULC data were found to be unnecessary as a predictor because the first- and second-order image texture measurements had greater power to explain variation in dwelling unit occurrence and abundance. The ability of the procedures to be implemented locally in the developing world using low-cost or freely available data and software was considered. The techniques discussed in this thesis are considered equally applicable to other medium- and high-resolution optical satellite imagery, as well as to the utilized SPOT data.
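The empirical line family of methods, to which HELM belongs, rests on fitting a linear relationship between the at-sensor digital numbers (DN) of ground calibration targets and their known surface reflectance factors, band by band. A minimal sketch of that core idea, with hypothetical calibration targets (the thesis's actual targets and coefficients are not reproduced here):

```python
# Empirical-line-style correction sketch: fit gain/offset per spectral
# band so that surface reflectance ~= gain * DN + offset, using ground
# targets of known reflectance. Target values below are hypothetical.

import numpy as np

def fit_empirical_line(dn, reflectance):
    """Least-squares gain and offset for one spectral band."""
    gain, offset = np.polyfit(dn, reflectance, deg=1)
    return gain, offset

# Hypothetical targets (e.g. dark water, asphalt, bright sand):
dn = np.array([20.0, 70.0, 180.0])
rho = np.array([0.02, 0.12, 0.34])

gain, offset = fit_empirical_line(dn, rho)
corrected = gain * dn + offset
rmse = float(np.sqrt(np.mean((corrected - rho) ** 2)))
print(rmse < 0.02)  # the accuracy benchmark cited above
```

HELM's distinguishing feature, as the abstract describes it, is deriving such targets historically so the method works where concurrent field campaigns are impossible.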
Abstract:
Farmland bird species have been declining in Europe. Many declines have coincided with the general intensification of farming practices. In Finland, the replacement of mixed farming, including rotational pastures, with specialized cultivation was one of the most drastic changes from the 1960s to the 1990s. This kind of habitat deterioration limits the persistence of populations, as has previously been shown in local populations. Integrated population monitoring, which gathers species-specific information on population size and demography, can be used to assess the response of a population to environmental changes at a large spatial scale as well. I targeted my analysis at the Finnish starling (Sturnus vulgaris). Starlings are common breeders in farmland habitats, but severe declines of local populations were reported in Finland in the 1970s and 1980s, and later in other parts of Europe. Habitat deterioration (replacement of pasture and grassland habitats with specialized cultivation areas) limits the reproductive success of the species. I analysed regional population data in order to exemplify the importance of agricultural change to bird population dynamics. I used nestling ringing and nest-card data from 1951 to 2005 to quantify population trends and per capita reproductive success within several geographical regions (south/north and west/east aspects). I used matrix modelling, acknowledging age-specific survival and fecundity parameters and density dependence, to model the population dynamics. Finnish starlings declined by 80% from the end of the 1960s to the end of the 1980s. The observed patterns and the model indicated that the population decline was due to a decline in the carrying capacity of farmland habitats. The decline was most severe in northern Finland, where populations largely became extinct. However, habitat deterioration was most severe in the southern breeding areas.
The deterioration in habitat quality reduced reproduction, which finally caused the decline. I suggest that the poorly productive northern populations were partly maintained by immigration from the highly productive southern populations. As the southern populations declined, the cessation of emigration from the south caused the population extinction in the north. This phenomenon was explained with source-sink population dynamics, which I structured and verified on the basis of a spatially explicit simulation model. I also found that the southern Finnish starling population exhibits a ten-year cyclic regularity, a phenomenon that can be explained by delayed density dependence in reproduction.
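Matrix modelling with age-specific survival and fecundity, as referred to above, is conventionally expressed as a Leslie matrix projection, n(t+1) = L n(t). A minimal sketch with two age classes; all parameter values are hypothetical, not the thesis's estimates:

```python
# Leslie matrix sketch: age-specific fecundity on the top row,
# survival probabilities on the sub-diagonal. Hypothetical values.

import numpy as np

fecundity = [0.8, 1.4]   # female fledglings per female, by age class
survival = 0.5           # probability of surviving from class 1 to 2

L = np.array([
    fecundity,           # reproduction by each age class
    [survival, 0.0],     # ageing from class 1 to class 2
])

# Project the population 20 years ahead from an initial age structure
n = np.array([100.0, 50.0])
for _ in range(20):
    n = L @ n

# The dominant eigenvalue of L is the asymptotic growth rate (lambda);
# lambda < 1, e.g. from fecundity reduced by habitat deterioration,
# would mean long-term decline.
lam = max(abs(np.linalg.eigvals(L)))
print(lam > 1.0)
```

Density dependence, as used in the thesis, would make the matrix entries functions of n rather than constants; the constant-matrix case above shows only the basic projection machinery.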
Abstract:
Bioremediation, which is the exploitation of the intrinsic ability of environmental microbes to degrade and remove harmful compounds from nature, is considered to be an environmentally sustainable and cost-effective means for environmental clean-up. However, a comprehensive understanding of the biodegradation potential of microbial communities and their response to decontamination measures is required for the effective management of bioremediation processes. In this thesis, the potential to use hydrocarbon-degradative genes as indicators of aerobic hydrocarbon biodegradation was investigated. Small-scale functional gene macro- and microarrays targeting aliphatic, monoaromatic and low molecular weight polyaromatic hydrocarbon biodegradation were developed in order to simultaneously monitor the biodegradation of mixtures of hydrocarbons. The validity of the array analysis in monitoring hydrocarbon biodegradation was evaluated in microcosm studies and field-scale bioremediation processes by comparing the hybridization signal intensities to hydrocarbon mineralization, real-time polymerase chain reaction (PCR), dot blot hybridization and both chemical and microbiological monitoring data. The results obtained by real-time PCR, dot blot hybridization and gene array analysis were in good agreement with hydrocarbon biodegradation in laboratory-scale microcosms. Mineralization of several hydrocarbons could be monitored simultaneously using gene array analysis. In the field-scale bioremediation processes, the detection and enumeration of hydrocarbon-degradative genes provided important additional information for process optimization and design. In creosote-contaminated groundwater, gene array analysis demonstrated that the aerobic biodegradation potential that was present at the site, but restrained under the oxygen-limited conditions, could be successfully stimulated with aeration and nutrient infiltration. 
During ex situ bioremediation of diesel oil- and lubrication oil-contaminated soil, the functional gene array analysis revealed inefficient hydrocarbon biodegradation, caused by poor aeration during composting. The functional gene array specifically detected upper and lower biodegradation pathways required for complete mineralization of hydrocarbons. Bacteria representing 1 % of the microbial community could be detected without prior PCR amplification. Molecular biological monitoring methods based on functional genes provide powerful tools for the development of more efficient remediation processes. The parallel detection of several functional genes using functional gene array analysis is an especially promising tool for monitoring the biodegradation of mixtures of hydrocarbons.
Abstract:
Viral infections caused by herpesviruses are common complications after organ transplantation, and they are associated with substantial morbidity and even mortality. Herpesviruses remain in a latent state in the host after primary infection and may reactivate later. CMV infection is the most important viral infection after liver transplantation. Less is known about the significance of human herpesvirus-6 (HHV-6). EBV is believed to play a major role in the development of post-transplant lymphoproliferative disorders (PTLD). The aim of this study was to investigate CMV, EBV and HHV-6 DNAemia after liver transplantation by frequent monitoring of adult liver transplant patients. The presence of CMV, EBV and HHV-6 DNA was demonstrated by in situ hybridization assays and by real-time PCR methods from peripheral blood specimens. CMV and HHV-6 antigens were demonstrated by antigenemia assays and compared with the viral DNAemia. The response to antiviral therapy was also investigated. CMV-DNAemia appeared earlier than CMV pp65-antigenemia after liver transplantation. CMV infections were treated with ganciclovir; however, most of the treated patients demonstrated persistence of CMV-DNA for up to several months. The continuous CMV-DNA expression in peripheral blood leukocytes showed that the virus is not eliminated by ganciclovir and that recurrences can be expected for several months after liver transplantation. HHV-6 DNAemia/antigenemia was common and usually occurred within the first three months after liver transplantation, together with CMV. HHV-6 DNA expression in peripheral blood mononuclear cells correlated well with HHV-6 antigenemia. Antiviral treatment significantly decreased the number of HHV-6 DNA-positive cells, demonstrating a response to ganciclovir treatment. Clinically silent EBV reactivations with low viral loads were relatively common after liver transplantation.
These EBV-DNAemias usually appeared within the first three months after liver transplantation, together with betaherpesviruses (CMV, HHV-6, HHV-7). One patient developed high EBV viral loads and subsequently PTLD. These results indicate that frequent monitoring of EBV-DNA levels can be useful for detecting liver transplant patients at risk of developing PTLD.
Abstract:
This study is one part of a collaborative depression research project, the Vantaa Depression Study (VDS), involving the Department of Mental and Alcohol Research of the National Public Health Institute, Helsinki, and the Department of Psychiatry of the Peijas Medical Care District (PMCD), Vantaa, Finland. The VDS includes two parts: a record-based study of 803 patients and a prospective, naturalistic cohort study of 269 patients. Both studies include secondary-level care psychiatric out- and inpatients with a new episode of major depressive disorder (MDD). Data for the record-based part of the study came from a computerised patient database incorporating all outpatient visits as well as treatment periods at the inpatient unit. We included all patients aged 20 to 59 years who had been assigned a clinical diagnosis of depressive episode or recurrent depressive disorder according to the International Classification of Diseases, 10th edition (ICD-10) criteria and who had at least one outpatient visit or day as an inpatient in the PMCD during the study period January 1, 1996, to December 31, 1996. All those with an earlier diagnosis of schizophrenia, other non-affective psychosis, or bipolar disorder were excluded, as were patients treated in the somatic departments of Peijas Hospital and those who had consulted but not received treatment from the psychiatric consultation services. The study sample comprised 290 male and 513 female patients. All their psychiatric records were reviewed, and a structured form with 57 items was completed for each patient. The treatment provided was reviewed up to the end of the depression episode or to the end of 1997. Most (84%) of the patients received antidepressants, although a minority (11%) were treated with clearly subtherapeutic doses.
During the treatment period, the depressed patients investigated averaged only a few visits to psychiatrists (median two visits) but more to other health professionals (median seven). One-fifth of both genders were inpatients, with a mean of nearly two inpatient treatment periods during the overall treatment period investigated. The median length of a hospital stay was 2 weeks. Use of antidepressants was quite conservative: the first antidepressant had been switched to another compound in only about one-fifth (22%) of patients, and only two patients had received up to five antidepressant trials. Only 7% of those prescribed any antidepressant received two antidepressants simultaneously. None of the patients was prescribed any other augmentation medication. Refusal of antidepressant treatment was the most common explanation for receiving no antidepressants. During the treatment period, 19% of those not already receiving a disability pension were granted one due to psychiatric illness. These patients were nearly nine years older than those not pensioned. They were also more severely ill, made significantly more visits to professionals, and received significantly more concomitant medications (hypnotics, anxiolytics, and neuroleptics) than did those receiving no pension. In the prospective part of the VDS, 806 adult patients (aged 20-59 years) were screened in the PMCD for a possible new episode of DSM-IV MDD. Of these, 542 patients were interviewed face-to-face with the WHO Schedules for Clinical Assessment in Neuropsychiatry (SCAN), Version 2.0. Exclusion criteria were the same as in the record-based part of the VDS. Of these 542, 269 patients fulfilled the criteria for a DSM-IV MDE. This study investigated factors associated with patients' functional disability, social adjustment, and work disability (being on sick-leave or being granted a disability pension).
At the beginning of treatment, the most important single factor associated with overall social and functional disability was severity of depression, but older age and personality disorders also contributed significantly. Total duration and severity of depression, phobic disorders, alcoholism, and personality disorders all independently contributed to poor social adjustment. Of those who were employed, almost half (43%) were on sick-leave. Besides severity and number of episodes of depression, female gender and age over 50 years strongly and independently predicted being on sick-leave. Factors influencing social and occupational disability and social adjustment among patients with MDD were studied prospectively during an 18-month follow-up period. Patients' functional disability lessened and social adjustment improved during the follow-up concurrently with recovery from depression. The current level of functioning and social adjustment of a patient with depression was predicted by severity of depression, recurrence before baseline and during follow-up, lack of full remission, and time spent depressed. Comorbid psychiatric disorders, personality traits (neuroticism), and perceived social support also had a significant influence. During the 18-month follow-up period, 13 (5%) of the 269 patients switched to bipolar disorder, and 58 (20%) dropped out. Of the remaining 198 patients, 186 (94%) were not pensioned at baseline, and they were investigated further. Of them, 21 were granted a disability pension during the follow-up. Those who received a pension were significantly older, less often had vocational education, and were more often on sick-leave than those not pensioned, but did not differ with regard to any other sociodemographic or clinical factors. Patients with MDD mostly received adequate antidepressant treatment, but problems existed in treatment intensity and monitoring.
It is challenging to identify those at greatest risk of disability and to provide them with adequate and efficacious treatment. Providing sufficient resources for this is a great challenge for society as a whole.
Abstract:
Mediastinitis is a rare but disastrous complication after cardiac surgery, increasing hospital stay, hospital costs, morbidity, and mortality. It occurs in 1-3% of patients after median sternotomy. The purpose of this study was to identify risk factors and to investigate new ways to prevent mediastinitis. First, we assessed operating room air contamination monitoring by comparing the bacteriological technique with continuous particle counting at the low contamination levels achieved by ultra-clean garment options in 66 coronary artery bypass grafting operations. Second, we examined surgical glove perforations and the changes in the bacterial flora of surgeons' fingertips in 116 open-heart operations. Third, the effect of a gentamicin-collagen sponge on preventing surgical site infections (SSI) was studied in a randomized controlled study with 557 participants. Finally, the incidence, outcome, and risk factors of mediastinitis were studied in over 10,000 patients. With the alternative garment and textile system (cotton group and clean air suit group), the air counts fell from 25 to 7 colony-forming units/m3 (P<0.01). Contamination of the sternal wound was reduced by 46% and that of the leg wound by >90%. In only 17% of operations were both gloves found unpunctured. The frequency of glove perforations and the bacterial counts of hands increased with operation time. With local gentamicin prophylaxis, slightly fewer SSIs (4.0% vs. 5.9%) and cases of mediastinitis (1.1% vs. 1.9%) occurred. We identified 120 cases of postoperative mediastinitis among 10,713 patients (1.1%). During the study period, the patient population grew significantly older, and the proportions of women and of patients with an ASA score >3 increased significantly. In multivariate logistic regression analysis, the only significant predictor of mediastinitis was obesity. Continuous particle monitoring is a good intraoperative method for controlling the air contamination related to theatre staff behaviour during individual operations.
When a glove puncture is detected, both gloves should be changed. Before donning a new pair of gloves, renewed disinfection of the hands helps to keep their bacterial counts lower even towards the end of a long operation. The gentamicin-collagen sponge may have beneficial effects on the prevention of SSI, but further research is needed. Mediastinitis is not diminishing. Larger populations at risk, for example the growing proportion of overweight patients, reinforce the importance of surveillance and pose a challenge in focusing preventive measures.
Abstract:
Background: The fecal neutrophil-derived proteins calprotectin and lactoferrin have proven to be useful surrogate markers of intestinal inflammation. The aim of this study was to compare fecal calprotectin and lactoferrin concentrations with clinically, endoscopically, and histologically assessed Crohn’s disease (CD) activity, and to explore the suitability of these proteins as surrogate markers of mucosal healing during anti-TNFα therapy. Furthermore, we studied changes in the number and expression of effector and regulatory T cells in bowel biopsy specimens during anti-TNFα therapy. Patients and methods: Adult CD patients referred for ileocolonoscopy for various reasons were recruited (106 examinations in 77 patients; Study I). Clinical disease activity was assessed with the Crohn’s disease activity index (CDAI) and endoscopic activity with both the Crohn’s disease endoscopic index of severity (CDEIS) and the simple endoscopic score for Crohn’s disease (SES-CD). Stool samples for measurements of calprotectin and lactoferrin, and blood samples for CRP, were collected. For Study II, biopsy specimens were obtained from the ileum and the colon for histologic activity scoring. In prospective Study III, after baseline ileocolonoscopy, 15 patients received induction with anti-TNFα blocking agents, and endoscopic, histologic, and fecal-marker responses to therapy were evaluated at 12 weeks. For detecting changes in the number and expression of effector and regulatory T cells, biopsy specimens were taken from the most severely diseased lesions in the ileum and the colon (Study IV). Results: Endoscopic scores correlated significantly with fecal calprotectin and lactoferrin (p<0.001). Both fecal markers were significantly lower in patients with endoscopically inactive than with active disease (p<0.001).
In detecting endoscopically active disease, the sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) for calprotectin ≥200 μg/g were 70%, 92%, 94%, and 61%; for lactoferrin ≥10 μg/g they were 66%, 92%, 94%, and 59%. Accordingly, the sensitivity, specificity, PPV, and NPV for CRP >5 mg/l were 48%, 91%, 91%, and 48%. Fecal markers were significantly higher in active colonic (both p<0.001) or ileocolonic (calprotectin p=0.028, lactoferrin p=0.004) than in ileal disease. In ileocolonic or colonic disease, the colon histology score correlated significantly with fecal calprotectin (r=0.563) and lactoferrin (r=0.543). In patients receiving anti-TNFα therapy, median fecal calprotectin decreased from 1173 μg/g (range 88-15326) to 130 μg/g (13-1419) and lactoferrin from 105.0 μg/g (4.2-1258.9) to 2.7 μg/g (0.0-228.5), both p=0.001. The ratio of ileal IL-17+ cells to CD4+ cells decreased significantly during anti-TNFα treatment (p=0.047). The ratio of IL-17+ cells to Foxp3+ cells was higher in the patients’ baseline specimens than in their post-treatment specimens (p=0.038). Conclusions: When evaluated against endoscopic findings, fecal calprotectin and lactoferrin were more sensitive surrogate markers of CD activity than CDAI and CRP. Both fecal markers were significantly higher in endoscopically active disease than in endoscopic remission. In both ileocolonic and colonic disease, the fecal markers correlated closely with histologic disease activity. In CD, these neutrophil-derived proteins thus seem to be useful surrogate markers of endoscopic activity. During anti-TNFα therapy, fecal calprotectin and lactoferrin decreased significantly. The anti-TNFα treatment was also reflected in a decreased IL-17/Foxp3 cell ratio, which may indicate an improved balance between effector and regulatory T cells with treatment.
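The sensitivity, specificity, PPV, and NPV figures reported above all derive from the same 2×2 confusion table of test result (marker above or below the cut-off) versus reference standard (endoscopically active or inactive disease). A minimal sketch of the calculation, using invented counts chosen only for illustration (they are not the study data):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Return sensitivity, specificity, PPV, and NPV from a 2x2 confusion table."""
    sensitivity = tp / (tp + fn)  # positives detected among all with active disease
    specificity = tn / (tn + fp)  # negatives among all with inactive disease
    ppv = tp / (tp + fp)          # probability of active disease given a positive test
    npv = tn / (tn + fn)          # probability of inactive disease given a negative test
    return sensitivity, specificity, ppv, npv

# Hypothetical cohort: 100 endoscopically active patients, of whom 70 exceed the
# marker cut-off, and 50 inactive patients, of whom 46 stay below it.
sens, spec, ppv, npv = diagnostic_metrics(tp=70, fp=4, fn=30, tn=46)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} PPV={ppv:.2f} NPV={npv:.2f}")
```

Note that, unlike sensitivity and specificity, PPV and NPV depend on the prevalence of active disease in the cohort studied, so the predictive values reported above transfer only to populations with a similar case mix.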