Abstract:
Hereditary Leiomyomatosis and Renal Cell Cancer (HLRCC) is a hereditary tumour predisposition syndrome. Its phenotype includes benign cutaneous and uterine leiomyomas (CLM, ULM) with high penetrance and, more rarely, renal cell cancer (RCC), most commonly of the papillary type 2 subtype. Over 130 HLRCC families have been identified worldwide, but the RCC phenotype seems to concentrate in families from Finland and North America for unknown reasons. HLRCC is caused by heterozygous germline mutations in the fumarate hydratase (FH) gene. FH encodes fumarase, an enzyme of the mitochondrial citric acid cycle. Neither fumarase enzyme activity nor the type or site of the FH mutation is associated with disease phenotype. The strongest evidence for the tumourigenesis mechanism in HLRCC supports a hypoxia-inducible-factor-driven process called pseudohypoxia, resulting from accumulation of the fumarase substrate fumarate. In this study, the multiplex ligation-dependent probe amplification (MLPA) method was used to assess the importance of gene- or exon-level deletions or amplifications of FH in patients with HLRCC-associated phenotypes. One novel FH mutation, a deletion of exon 1, was found in a Swedish male patient with an evident HLRCC phenotype with CLM, RCC, and a family history of ULM and RCC. Six other patients with CLM and 12 patients with only RCC or uterine leiomyosarcoma (ULMS) remained FH mutation-negative. These results suggest that copy number aberrations of FH or its exons are an infrequent cause of HLRCC and that only co-occurrence of benign tumour types justifies FH-mutation screening in RCC or ULMS patients. The genomic profiles of 11 HLRCC-associated RCCs from Finnish patients were determined by array comparative genomic hybridization. The most common copy number aberrations were gains of chromosomes 2, 7, and 17 and losses of 13q12.3-q21.1, 14, 18, and X. 
When compared to aberrations of sporadic papillary RCCs, HLRCC-associated RCCs harboured a distinct DNA copy number profile and lacked many of the changes characterizing the sporadic RCCs. The findings suggest a divergent molecular pathway for the tumourigenesis of papillary RCCs in HLRCC. In order to find a genetic modifier of RCC risk in HLRCC, genome-wide linkage and identity-by-descent (IBD) analyses were performed in Finnish HLRCC families with microsatellite marker mapping and SNP-array platforms. The linkage analysis identified only one locus of interest, the FH gene locus in 1q43, but no mutations were found in the genes of the region. IBD analysis yielded no convincing haplotypes shared by RCC patients. Although these results do not exclude the existence of a genetic modifier of RCC risk in HLRCC, they emphasize the role of FH mutations in the malignant tumourigenesis of HLRCC. To study the benign tumours in HLRCC, genome-wide DNA copy number and gene expression profiles of sporadic and HLRCC ULMs were defined with modern SNP- and gene-expression array platforms. The gene expression array suggests novel genes involved in FH-deficient ULM tumourigenesis and novel genes with putative roles in the propagation of sporadic ULM. Both the gene expression and copy number profiles of HLRCC ULMs differed from those of sporadic ULMs, indicating a distinct molecular basis for the FH-deficient HLRCC tumours.
Abstract:
Increased sediment and nutrient losses resulting from unsustainable grazing management in the Burdekin River catchment are major threats to water quality in the Great Barrier Reef Lagoon. To test the effects of grazing management on soil and nutrient loss, five 1 ha mini-catchments were established in 1999 under different grazing strategies on a sedimentary landscape near Charters Towers. Reference samples were also collected from watercourses in the Burdekin catchment during major flow events. Soil and nutrient loss were relatively low across all grazing strategies due to a combination of good cover, low slope and low rainfall intensities. Total soil loss varied from 3 to 20 kg ha⁻¹ per event, while losses of N and P ranged from 10 to 1900 g ha⁻¹ and from 1 to 71 g ha⁻¹ per event respectively. Water quality of runoff was considered moderate across all strategies, with relatively low levels of total suspended sediment (range: 8-1409 mg l⁻¹), total N (range: 101-4000 µg l⁻¹) and total P (range: 14-609 µg l⁻¹). However, treatment differences are likely to emerge with time as the impacts of the different grazing strategies on land condition become more apparent. Samples collected opportunistically from rivers and creeks during flow events displayed significantly higher levels of total suspended sediment (range: 10-6010 mg l⁻¹), total N (range: 650-6350 µg l⁻¹) and total P (range: 50-1500 µg l⁻¹) than those collected at the grazing trial. These differences can largely be attributed to variation in slope, geology and cover between the grazing trial and the different catchments. In particular, watercourses draining hillier, grano-diorite landscapes with low cover had markedly higher sediment and nutrient loads compared to those draining flatter, sedimentary landscapes. These preliminary data suggest that on relatively flat, sedimentary landscapes, extensive cattle grazing is compatible with achieving water quality targets, provided high levels of ground cover are maintained. 
In contrast, sediment and nutrient loss under grazing on more erodible land types is cause for serious concern. Long-term empirical research and monitoring will be essential to quantify the impacts of changed land management on water quality in the spatially and temporally variable Burdekin River catchment.
Abstract:
Strawberries (Fragaria sp.) are adapted to diverse environmental conditions from the tropics to about 70°N, so different responses to environmental conditions can be found. Most genotypes of the garden strawberry (F. x ananassa Duch.) and woodland strawberry (F. vesca L.) are short-day (SD) plants that are induced to flower by photoperiods below a critical limit, but various photoperiod × temperature interactions can also be found. In addition, continuously flowering everbearing (EB) genotypes exist. Besides flowering, axillary bud differentiation in strawberry is regulated by photoperiod. In SD conditions, axillary buds differentiate into rosette-like structures called "branch crowns", whereas in long-day (LD) conditions they form runners: branches with two long internodes followed by a daughter plant (leaf rosette). The number of crown branches determines the yield of the plant, since inflorescences are formed from the apical meristems of the crown. Although axillary bud differentiation is an important developmental process in strawberries, its environmental and hormonal regulation has not been characterized in detail. Moreover, the genetic mechanisms underlying axillary bud differentiation and the regulation of flowering time in these species are almost completely unresolved. These topics were studied in this thesis in order to advance strawberry research, cultivation and breeding. The results showed that 8-12 SD cycles suppressed runner initiation from the axillary buds of the garden strawberry cv. Korona with concomitant induction of crown branching, and 3 weeks of SD was sufficient for the induction of flowering in the main crown. Furthermore, a second SD treatment given a few weeks after the first SD period can be used to induce flowering in the primary branch crowns and to induce the formation of secondary branches. 
Thus, artificial SD treatments effectively stimulate crown branching, providing one means of increasing the cropping (yield) potential in strawberry. It was also shown by growth regulator applications, quantitative hormone analysis and gene expression analysis that gibberellin (GA) is one of the key signals involved in the photoperiodic control of shoot differentiation. The results indicate that photoperiod controls GA activity specifically in axillary buds, thereby determining bud fate. It was further shown that chemical control of GA biosynthesis by prohexadione-calcium (ProCa) can be utilized to prevent excessive runner formation and induce crown branching in strawberry fields. Moreover, ProCa increased berry yield by up to 50%, showing that it is an easier and more applicable alternative to artificial SD treatments for controlling strawberry crown development and yield. Finally, flowering gene pathways in Fragaria were explored by searching for homologs of 118 Arabidopsis thaliana flowering-time genes. In total, 66 gene homologs were identified, and they were distributed across all known flowering pathways, suggesting the presence of these pathways also in strawberry. Expression analysis of selected genes revealed that the mRNA of the putative floral identity gene APETALA1 accumulated in the shoot apex of the EB genotype after the induction of flowering, whereas it was absent in the vegetative SD genotype, indicating the usefulness of this gene product as a marker of floral initiation. The present data enable the further exploration of strawberry flowering pathways with genetic transformation, gene mapping and transcriptomics methods.
Abstract:
Objective To assess the impact of exercise referral schemes on physical activity and health outcomes. Design Systematic review and meta-analysis. Data sources Medline, Embase, PsycINFO, Cochrane Library, ISI Web of Science, SPORTDiscus, and ongoing trial registries up to October 2009. We also checked study references. Study selection - Design: randomised controlled trials or non-randomised controlled (cluster or individual) studies published in peer-reviewed journals. - Population: sedentary individuals with or without a medical diagnosis. - Exercise referral schemes defined as: clear referrals by primary care professionals to third-party service providers to increase physical activity or exercise, physical activity or exercise programmes tailored to individuals, and initial assessment and monitoring throughout programmes. - Comparators: usual care, no intervention, or alternative exercise referral schemes. Results Eight randomised controlled trials met the inclusion criteria, comparing exercise referral schemes with usual care (six trials), alternative physical activity intervention (two), and an exercise referral scheme plus a self-determination theory intervention (one). Compared with usual care, follow-up data for exercise referral schemes showed an increased number of participants who achieved 90-150 minutes of physical activity of at least moderate intensity per week (pooled relative risk 1.16, 95% confidence interval 1.03 to 1.30) and a reduced level of depression (pooled standardised mean difference −0.82, −1.28 to −0.35). Evidence of a between-group difference in physical activity of moderate or vigorous intensity or in other health outcomes was inconsistent at follow-up. We did not find any difference in outcomes between exercise referral schemes and the other two comparator groups. None of the included trials separately reported outcomes in individuals with specific medical diagnoses. 
Substantial heterogeneity in the quality and nature of the exercise referral schemes across studies might have contributed to the inconsistency in outcome findings. Conclusions Considerable uncertainty remains as to the effectiveness of exercise referral schemes for increasing physical activity, fitness, or health indicators, or whether they are an efficient use of resources for sedentary people with or without a medical diagnosis.
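The pooled relative risk quoted above is the standard inverse-variance summary of the individual trial results. As a rough illustration only (the trial counts below are hypothetical, not the review's data), a fixed-effect pooling on the log scale can be sketched as:

```python
import math

def pool_relative_risk(trials):
    """Inverse-variance fixed-effect pooling of relative risks on the log scale.

    Each trial is (events_treat, n_treat, events_ctrl, n_ctrl).
    Returns (pooled RR, lower 95% CI, upper 95% CI).
    """
    num = den = 0.0
    for a, n1, c, n2 in trials:
        log_rr = math.log((a / n1) / (c / n2))
        # Approximate variance of log(RR) for a binomial outcome
        var = 1 / a - 1 / n1 + 1 / c - 1 / n2
        w = 1 / var
        num += w * log_rr
        den += w
    pooled = num / den
    se = math.sqrt(1 / den)
    return tuple(math.exp(x) for x in (pooled, pooled - 1.96 * se, pooled + 1.96 * se))

# Hypothetical counts for three trials (achievers / randomised, scheme vs usual care)
rr, ci_lo, ci_hi = pool_relative_risk([(60, 150, 48, 150), (85, 200, 70, 190), (40, 120, 38, 125)])
```

A random-effects model (as typically used when heterogeneity is substantial) would additionally widen the interval by an estimate of between-trial variance.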
Abstract:
- Background Exercise referral schemes (ERS) aim to identify inactive adults in the primary-care setting. The GP or health-care professional then refers the patient to a third-party service, with this service taking responsibility for prescribing and monitoring an exercise programme tailored to the needs of the individual. - Objective To assess the clinical effectiveness and cost-effectiveness of ERS for people with a diagnosed medical condition known to benefit from physical activity (PA). The scope of this report was broadened to consider individuals without a diagnosed condition who are sedentary. - Data sources MEDLINE, EMBASE, PsycINFO, The Cochrane Library, ISI Web of Science, SPORTDiscus and ongoing trial registries were searched (from 1990 to October 2009), and included study references were checked. - Methods Systematic reviews: the effectiveness of ERS, predictors of ERS uptake and adherence, and the cost-effectiveness of ERS; and the development of a decision-analytic economic model to assess the cost-effectiveness of ERS. - Results Seven randomised controlled trials (UK, n = 5; non-UK, n = 2) met the effectiveness inclusion criteria: five compared ERS with usual care, two compared ERS with an alternative PA intervention, and one compared ERS with ERS plus a self-determination theory (SDT) intervention. In intention-to-treat analysis, compared with usual care, there was weak evidence of an increase in the number of ERS participants who achieved a self-reported 90-150 minutes of at least moderate-intensity PA per week at 6-12 months' follow-up [pooled relative risk (RR) 1.11, 95% confidence interval 0.99 to 1.25]. There was no consistent evidence of a difference between ERS and usual care in the duration of moderate/vigorous-intensity and total PA or in other outcomes, for example physical fitness, serum lipids, and health-related quality of life (HRQoL). 
There was no between-group difference in outcomes between ERS and alternative PA interventions or ERS plus a SDT intervention. None of the included trials separately reported outcomes in individuals with medical diagnoses. Fourteen observational studies and five randomised controlled trials provided a numerical assessment of ERS uptake and adherence (UK, n = 16; non-UK, n = 3). Women and older people were more likely to take up ERS but women, when compared with men, were less likely to adhere. The four previous economic evaluations identified suggest ERS to be a cost-effective intervention. Indicative incremental cost per quality-adjusted life-year (QALY) estimates for ERS for various scenarios were based on a de novo model-based economic evaluation. Compared with usual care, the mean incremental cost for ERS was £169 and the mean incremental QALY was 0.008, with the base-case incremental cost-effectiveness ratio at £20,876 per QALY in sedentary people without a medical condition and a cost per QALY of £14,618 in sedentary obese individuals, £12,834 in sedentary hypertensive patients, and £8414 for sedentary individuals with depression. Estimates of cost-effectiveness were highly sensitive to plausible variations in the RR for change in PA and cost of ERS. - Limitations We found very limited evidence of the effectiveness of ERS. The estimates of the cost-effectiveness of ERS are based on a simple analytical framework. The economic evaluation reports small differences in costs and effects, and findings highlight the wide range of uncertainty associated with the estimates of effectiveness and the impact of effectiveness on HRQoL. No data were identified as part of the effectiveness review to allow for adjustment of the effect of ERS in different populations. 
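The cost-per-QALY figures above follow directly from the definition of the incremental cost-effectiveness ratio (ICER): extra cost divided by extra QALYs gained versus the comparator. A minimal sketch using the rounded base-case numbers reported here (the published £20,876/QALY will have been computed from unrounded model outputs, so the rounded inputs give a slightly different value):

```python
def icer(delta_cost, delta_qaly):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY gained."""
    return delta_cost / delta_qaly

# Rounded base-case increments vs usual care: +£169 and +0.008 QALYs per person.
base_case = icer(169, 0.008)  # ≈ £21,100/QALY with these rounded inputs
```

The sensitivity of the ICER to the RR for change in PA noted in the abstract is visible here: the QALY gain in the denominator is tiny, so small changes in effectiveness move the ratio a long way.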
- Conclusions There remains considerable uncertainty as to the effectiveness of ERS for increasing activity, fitness or health indicators, or whether they are an efficient use of resources in sedentary people without a medical diagnosis. We failed to identify any trial-based evidence of the effectiveness of ERS in those with a medical diagnosis. Future work should include randomised controlled trials assessing the clinical effectiveness and cost-effectiveness of ERS in disease groups that may benefit from PA. - Funding The National Institute for Health Research Health Technology Assessment programme.
Abstract:
Circulating tumor cells (CTCs) are the seeds of cancer metastasis, which is responsible for >90% of cancer-related deaths. Accurate quantification of CTCs in human fluids could be an invaluable tool for understanding cancer prognosis, delivering personalized medicine to prevent metastasis and assessing cancer therapy effectiveness. Although CTCs were first described well over a century ago, capturing and diagnosing them in clinical settings has remained a major challenge for clinical practitioners. Society needs rapid, sensitive, and reliable assays to identify CTCs in blood in order to help save millions of lives. Owing to the phenotypic epithelial-mesenchymal transition (EMT), CTCs go undetected in more than one-third of metastatic breast cancer patients in the clinic. To tackle these challenges, the first volume in “Circulating Tumor Cells (CTCs): Detection Methods, Health Impact and Emerging Clinical Challenges” discusses recent developments in different technologies capable of targeting and elucidating the phenotypic heterogeneity of CTCs. It contains seven chapters written by world leaders in this area, covering basic science through possible device designs that can have beneficial applications in society. This book is unique in its design and content, providing an in-depth analysis of the biological mechanisms of cancer progression, CTC detection challenges, possible health effects and the latest research on evolving technologies with the capability to tackle these challenges. Its coverage ranges from CTC biology as an early predictor of the metastatic spread of cancer, through promising new technologies for CTC separation and detection in the clinical environment, to monitoring therapy efficacy by resolving the heterogeneous nature of CTCs. (Imprint: Nova Biomedical)
Abstract:
This paper presents a new numerical integration technique on arbitrary polygonal domains. The polygonal domain is mapped conformally to the unit disk using Schwarz-Christoffel mapping, and a midpoint quadrature rule defined on this unit disk is used. This method eliminates the need for the two-level isoparametric mapping usually required. Moreover, the positivity of the Jacobian is guaranteed. Numerical results presented for a few benchmark problems in the context of polygonal finite elements show that the proposed method yields accurate results.
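The quadrature step on the unit disk can be sketched as follows. This is only an illustrative midpoint rule in polar coordinates; it omits the Schwarz-Christoffel map and its Jacobian, which the paper composes with the disk rule to integrate over the original polygon:

```python
import math

def disk_midpoint_quadrature(f, nr=64, nt=64):
    """Midpoint-rule quadrature over the unit disk in polar coordinates.

    Approximates the integral of f(x, y) over x^2 + y^2 <= 1 by sampling
    f at the midpoints of an nr x nt polar grid; the factor r is the
    Jacobian of the polar map, which is strictly positive away from the origin.
    """
    dr, dt = 1.0 / nr, 2.0 * math.pi / nt
    total = 0.0
    for i in range(nr):
        r = (i + 0.5) * dr          # radial midpoint
        for j in range(nt):
            t = (j + 0.5) * dt      # angular midpoint
            total += f(r * math.cos(t), r * math.sin(t)) * r * dr * dt
    return total

area = disk_midpoint_quadrature(lambda x, y: 1.0)        # ≈ pi (disk area)
moment = disk_midpoint_quadrature(lambda x, y: x * x)    # ≈ pi/4
```

Because the midpoint samples never touch r = 0 or the boundary, the weight r (and, in the full method, the composed Jacobian) stays positive at every quadrature point.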
Abstract:
This study is one part of a collaborative depression research project, the Vantaa Depression Study (VDS), involving the Department of Mental Health and Alcohol Research of the National Public Health Institute, Helsinki, and the Department of Psychiatry of the Peijas Medical Care District (PMCD), Vantaa, Finland. The VDS includes two parts: a record-based study of 803 patients and a prospective, naturalistic cohort study of 269 patients. Both studies include secondary-level care psychiatric out- and inpatients with a new episode of major depressive disorder (MDD). Data for the record-based part of the study came from a computerised patient database incorporating all outpatient visits as well as treatment periods at the inpatient unit. We included all patients aged 20 to 59 years who had been assigned a clinical diagnosis of depressive episode or recurrent depressive disorder according to the International Classification of Diseases, 10th edition (ICD-10) criteria and who had at least one outpatient visit or day as an inpatient in the PMCD during the study period January 1, 1996, to December 31, 1996. All those with an earlier diagnosis of schizophrenia, other non-affective psychosis, or bipolar disorder were excluded, as were patients treated in the somatic departments of Peijas Hospital and those who had consulted but not received treatment from the psychiatric consultation services. The study sample comprised 290 male and 513 female patients. All psychiatric records were reviewed, and a structured form with 57 items was completed for each patient. The treatment provided was reviewed up to the end of the depression episode or to the end of 1997. Most (84%) of the patients received antidepressants, including a minority (11%) on treatment with clearly subtherapeutic low doses. 
During the treatment period the depressed patients investigated averaged only a few visits to psychiatrists (median two visits), but more to other health professionals (median seven). One-fifth of both genders were inpatients, with a mean of nearly two inpatient treatment periods during the overall treatment period investigated. The median length of a hospital stay was 2 weeks. Use of antidepressants was quite conservative: the first antidepressant had been switched to another compound in only about one-fifth (22%) of patients, and only two patients had received up to five antidepressant trials. Only 7% of those prescribed any antidepressant received two antidepressants simultaneously, and none of the patients was prescribed any other augmentation medication. Refusal of antidepressant treatment was the most common explanation for receiving no antidepressants. During the treatment period, 19% of those not already receiving a disability pension were granted one due to psychiatric illness. These patients were nearly nine years older than those not pensioned. They were also more severely ill, made significantly more visits to professionals and received significantly more concomitant medications (hypnotics, anxiolytics, and neuroleptics) than did those receiving no pension. In the prospective part of the VDS, 806 adult patients (aged 20-59 years) were screened in the PMCD for a possible new episode of DSM-IV MDD. Of these, 542 patients were interviewed face-to-face with the WHO Schedules for Clinical Assessment in Neuropsychiatry (SCAN), Version 2.0. Exclusion criteria were the same as in the record-based part of the VDS. Of these 542, 269 patients fulfilled the criteria of a DSM-IV MDE. This study investigated factors associated with patients' functional disability, social adjustment, and work disability (being on sick-leave or being granted a disability pension). 
At the beginning of treatment, the most important single factor associated with overall social and functional disability was found to be the severity of depression, but older age and personality disorders also contributed significantly. Total duration and severity of depression, phobic disorders, alcoholism, and personality disorders all independently contributed to poor social adjustment. Of those who were employed, almost half (43%) were on sick-leave. Besides severity and number of episodes of depression, female gender and age over 50 years strongly and independently predicted being on sick-leave. Factors influencing social and occupational disability and social adjustment among patients with MDD were studied prospectively during an 18-month follow-up period. Patients' functional disability and social adjustment improved during the follow-up concurrently with recovery from depression. The current level of functioning and social adjustment of a patient with depression was predicted by severity of depression, recurrence before baseline and during follow-up, lack of full remission, and time spent depressed. Comorbid psychiatric disorders, personality traits (neuroticism), and perceived social support also had a significant influence. During the 18-month follow-up period, 13 (5%) of the 269 patients switched to bipolar disorder, and 58 (20%) dropped out. Of the remaining 198 patients, 186 (94%) were not pensioned at baseline, and they were investigated. Of them, 21 were granted a disability pension during the follow-up. Those who received a pension were significantly older, less often had vocational education, and were more often on sick-leave than those not pensioned, but did not differ with regard to any other sociodemographic or clinical factors. Patients with MDD received mostly adequate antidepressant treatment, but problems existed in treatment intensity and monitoring. 
It is challenging to identify those at greatest risk of disability and to provide them with adequate and efficacious treatment. This poses a great challenge to society as a whole, which must provide sufficient resources.
Abstract:
Septic shock is a common killer in intensive care units (ICUs). The most crucial issues for outcome are the early and aggressive start of treatment aimed at normalization of hemodynamics and the early start of antibiotics during the very first hours. The optimal targets of hemodynamic treatment, and the impact of hemodynamic treatment on survival after the first resuscitation period, are less well known. The objective of this study was to evaluate different aspects of the hemodynamic pattern in septic shock, with special attention to prediction of outcome. In particular, components of early treatment and monitoring in the ICU were assessed. A total of 401 patients, 218 with septic shock and 192 with severe sepsis or septic shock, were included in the study. The patients were treated in 24 Finnish ICUs during 1999-2005; 295 of the patients were included in the Finnish national epidemiologic Finnsepsis study. We found that the most important hemodynamic variables with respect to outcome were the mean arterial pressure (MAP) and lactate during the first six hours in the ICU, and the MAP and a mixed venous oxygen saturation (SvO2) under 70% during the first 48 hours. A MAP under 65 mmHg and an SvO2 below 70% were the best predictive thresholds. A high central venous pressure (CVP) also correlated with adverse outcome. We assessed the correlation and agreement of SvO2 and mean central venous oxygen saturation (ScvO2) in septic shock during the first day in the ICU. The mean SvO2 was below the ScvO2 during early sepsis. The bias of the difference was 4.2% (95% limits of agreement −8.1% to 16.5%) by Bland-Altman analysis. The difference between the saturation values correlated significantly with cardiac index and oxygen delivery. Thus, ScvO2 cannot be used as a substitute for SvO2 in hemodynamic monitoring in the ICU. Several biomarkers have been investigated for their ability to aid diagnosis or outcome prediction in sepsis. 
We assessed the predictive value of N-terminal pro-brain natriuretic peptide (NT-proBNP) for mortality in severe sepsis or septic shock. NT-proBNP levels were significantly higher in hospital nonsurvivors, and the NT-proBNP level 72 hrs after inclusion was an independent predictor of hospital mortality. Acute cardiac load contributed to NT-proBNP values at admission, but renal failure was the main confounding factor later. The accuracy of NT-proBNP, however, was not sufficient for clinical decision-making concerning outcome prediction. Delays in the start of treatment are associated with poorer prognosis in sepsis. We assessed how the early treatment guidelines had been adopted, and what the impact of early treatment was on mortality in septic shock in Finland. We found that early treatment was not optimal in Finnish hospitals, and this was reflected in mortality. A delayed initiation of antimicrobial agents was especially associated with unfavorable outcome.
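The Bland-Altman agreement analysis used for the SvO2/ScvO2 comparison (a bias with 95% limits of agreement) can be sketched as follows; the paired saturation values below are hypothetical illustrations, not the study's data:

```python
import math

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two paired measurement methods.

    Returns (bias, lower LoA, upper LoA): bias is the mean of the pairwise
    differences, and the limits of agreement are bias +/- 1.96 * SD(differences).
    """
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))  # sample SD
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired saturations (%): ScvO2 tends to read higher than SvO2
scvo2 = [72.0, 68.5, 75.0, 70.0, 66.0, 74.5]
svo2 = [68.0, 65.0, 69.5, 66.5, 63.0, 70.0]
bias, loa_lo, loa_hi = bland_altman(scvo2, svo2)
```

As in the study, a positive bias with wide limits of agreement indicates that the two measurements are not interchangeable even though they are correlated.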
Abstract:
The “distractor-frequency effect” refers to the finding that high-frequency (HF) distractor words slow picture naming less than low-frequency distractors in the picture–word interference paradigm. Rival input and output accounts of this effect have been proposed. The former attributes the effect to attentional selection mechanisms operating during distractor recognition, whereas the latter attributes it to monitoring/decision mechanisms operating on distractor and target responses in an articulatory buffer. Using high-density (128-channel) EEG, we tested hypotheses from these rival accounts. In addition to conducting stimulus- and response-locked whole-brain corrected analyses, we investigated the correct-related negativity, an ERP observed on correct trials at fronto-central electrodes proposed to reflect the involvement of domain-general monitoring. The whole-brain ERP analysis revealed a significant effect of distractor frequency at inferior right frontal and temporal sites between 100 and 300 msec post-stimulus onset, during which lexical access is thought to occur. Response-locked, region-of-interest (ROI) analyses of fronto-central electrodes revealed a correct-related negativity starting 121 msec before and peaking 125 msec after vocal onset on the grand averages. Slope analysis of this component revealed a significant difference between HF and low-frequency distractor words, with the former associated with a steeper slope on the time window spanning from 100 msec before to 100 msec after vocal onset. The finding of ERP effects in time windows and components corresponding to both lexical processing and monitoring suggests the distractor-frequency effect is most likely associated with more than one physiological mechanism.
Abstract:
Bactrocera tryoni (Froggatt) is Australia's major horticultural insect pest, yet monitoring females remains logistically difficult. We trialled the ‘Ladd trap’ as a potential female surveillance or monitoring tool. This trap design is used to trap and monitor fruit flies in countries other than Australia (e.g. the USA). The Ladd trap consists of a flat yellow panel (a traditional ‘sticky trap’) with a three-dimensional red sphere (a fruit mimic) attached in the middle. We confirmed, in field-cage trials, that the combination of yellow panel and red sphere was more attractive to B. tryoni than the two components in isolation. In a second set of field-cage trials, we showed that it was the red-yellow contrast, rather than the three-dimensional effect, that was responsible for the trap's effectiveness, with B. tryoni equally attracted to a Ladd trap as to a two-dimensional yellow panel with a circular red centre. The sex ratio of catches was approximately even in the field-cage trials. In field trials, we tested the traditional red-sphere Ladd trap against traps for which the sphere was painted blue, black or yellow. The colour of the sphere did not significantly influence trap efficiency in these trials, despite the fact that the yellow-panel/yellow-sphere combination presented no colour contrast to the flies. In 6 weeks of field trials, over 1500 flies were caught, almost exactly two-thirds of them females. Overall, flies were more likely to be caught on the yellow panel than on the sphere; but for the commercial Ladd trap, proportionally more females were caught on the red sphere versus the yellow panel than would be predicted from the relative surface area of each component, a result also seen in the field-cage trials. We determined that no modification of the trap was more effective than the commercially available Ladd trap, and so consider that product suitable for more extensive field testing as a B. tryoni research and monitoring tool.
Abstract:
Sustainable development work in schools refers to environmental education grounded in ecological, economic, social and cultural sustainability. In Helsinki, the sustainable development tools of comprehensive schools (environmental audits and programmes) have been based on the planning of the schools' environmental matters, the implementation of curricula, and maintenance functions such as waste management. In 2005 and 2009, the Education Department assessed environmental activity using environmental performance levels 1-3, where grade 3 denotes a school most advanced in environmental matters. The aim of this thesis is to examine differences in waste costs and waste amounts (euros/person and kg/person) between comprehensive schools grouped by environmental performance level, and to identify factors that may explain these differences. The thesis was commissioned by the 4V (Välitä, vaikuta, viihdy, voi hyvin) project, one focus area of which is sustainable development work in schools. The results will inform the environmental work of the Education Department and other parties, such as HSY and Palmia, as well as the development of waste management in the school properties managed by the Real Estate Department's Premises Centre (Tilakeskus). The research data consisted of the 2009 waste costs of Helsinki comprehensive schools and the results of waste monitoring collected for this thesis in 2010. The waste cost and waste amount data were combined into a sample (n=64) on the basis of the 2009 environmental performance classifications. The final waste cost and amount analysis was carried out on a sample of 29 schools, excluding properties with uses other than schooling. The analysis also included a closer examination of the costs and amounts of the schools' mixed waste and biowaste fractions. The study concluded that there are considerable differences, in euros and kilograms per person, in waste costs and amounts between comprehensive schools at different environmental levels. 
No significant change in the total amount of waste occurred between the years examined, but sorting appears to have become more efficient. Based on the results, the level 3 schools, the most advanced in environmental matters, had the lowest average mixed waste amounts and costs compared with level 1 and 2 schools. Biowaste amounts and costs were highest in the level 2 schools. Waste costs and amounts appear to be affected by the optimization of the number, size and emptying intervals of waste containers, as well as by container fill rates. Comprehensive schools should use sustainable development work to focus on preventing and reducing waste generation, so that waste costs would also decrease as a result of waste management improvements.
Abstract:
The literature review elucidates the mechanism of oxidation in proteins and amino acids, gives an overview of the detection and analysis of protein oxidation products, and provides information about β-lactoglobulin and studies on modifications of this protein under various conditions. The experimental research included fractionation of the tryptic peptides of β-lactoglobulin using preparative HPLC-MS and monitoring of the oxidation of these peptides by reversed-phase HPLC-UV. The peptides chosen for oxidation were selected for their content of oxidation-susceptible amino acids and fractionated according to their m/z values. These peptides were IPAVFK (m/z 674), ALPMHIR (m/z 838), LIVTQTMK (m/z 934) and VLVLDTDYK (m/z 1066). Even though it was not possible to isolate the target peptides completely, owing to co-elution of various fractions, the percentages of target peptides in the samples were sufficient to carry out the oxidation procedure. The IPAVFK and VLVLDTDYK fractions yielded the oxidation products reviewed in the literature; however, unoxidized peptides were still present in high amounts after 21 days of oxidation. The UV data at 260 and 280 nm enabled monitoring of both the main peptides and their oxidation products, owing to the absorbance of the aromatic side-chains these peptides possess. The ALPMHIR and LIVTQTMK fractions were oxidatively consumed rapidly, and oxidation products of these peptides were observed even on day 0. The high depletion rates of these peptides were attributed to the presence of His (H) and the sulfur-containing side-chain of Met (M). In conclusion, the selected peptides hold potential as marker peptides of β-lactoglobulin oxidation.
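The quoted m/z values are consistent with singly protonated tryptic peptides. A minimal sketch (not the authors' code) reproduces them from standard monoisotopic amino-acid residue masses; the calculated [M+H]+ values agree with the quoted nominal masses to within about 1 Da.

```python
# Monoisotopic residue masses (Da) for the amino acids appearing in the
# four target peptides; a full table would cover all 20 residues.
MONO_RESIDUE_MASS = {
    'A': 71.037, 'D': 115.027, 'F': 147.068, 'H': 137.059, 'I': 113.084,
    'K': 128.095, 'L': 113.084, 'M': 131.040, 'P': 97.053, 'Q': 128.059,
    'R': 156.101, 'T': 101.048, 'V': 99.068, 'Y': 163.063,
}
WATER = 18.011    # H2O added for the peptide's free N- and C-termini
PROTON = 1.007    # charge-carrying proton of the [M+H]+ ion

def mh_plus(sequence: str) -> float:
    """Monoisotopic m/z of the singly protonated peptide [M+H]+."""
    return sum(MONO_RESIDUE_MASS[aa] for aa in sequence) + WATER + PROTON

for pep, quoted in [("IPAVFK", 674), ("ALPMHIR", 838),
                    ("LIVTQTMK", 934), ("VLVLDTDYK", 1066)]:
    print(f"{pep}: calculated {mh_plus(pep):.2f}, quoted {quoted}")
```

The small residual differences reflect rounding to nominal mass in the abstract.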
Abstract:
Acute childhood osteomyelitis (OM), septic arthritis (SA), and their combination, osteomyelitis with adjacent septic arthritis (OM+SA), are conventionally treated with long courses of antimicrobials and immediate surgery. We conducted a prospective multi-center randomized trial among Finnish children aged 3 months to 15 years in 1983-2005. According to the two-by-two factorial study design, children with OM or OM+SA received 20 or 30 days of antimicrobials, whereas those with SA were treated for 10 or 30 days. In addition, the whole series was randomized to treatment with clindamycin or a first-generation cephalosporin. Cases were included only if the causative agent was isolated. Treatment was instituted intravenously, but only for the first 2-4 days. Percutaneous aspiration was performed to obtain a representative sample for bacteriology, but all other surgical intervention was kept to a minimum. A total of 265 patients fulfilled our strict inclusion criteria and were analyzed; 106 children had OM, 134 SA, and 25 OM+SA. In the OM group, one child in the long-term and one in the short-term treatment group developed sequelae. One child with SA developed a late re-infection of the same joint twice, but the causative agents differed. Regarding surgery, diagnostic arthrocentesis or corticotomy was the only surgical procedure performed in most cases. Routine arthrotomy was not required even in hip arthritis. Serum C-reactive protein (CRP) proved to be a reliable laboratory index in the diagnosis and monitoring of osteoarticular infections. The recovery rate was similar regardless of whether clindamycin or a first-generation cephalosporin was used. We conclude that a course of 20 days of these well-absorbed antimicrobials is sufficient for OM or OM+SA, and 10 days for SA, in most cases beyond the neonatal age. A short intravenous phase of only 2-5 days often suffices. CRP gives valuable information in monitoring the course of illness.
Besides diagnostic aspiration, surgery should be reserved for selected cases.
Abstract:
Lung cancer accounts for more cancer-related deaths than any other cancer. In Finland, five-year survival ranges from 8% to 13%. The main risk factor for lung cancer is long-term cigarette smoking, but its carcinogenesis requires several other factors. The aims of the present study were to 1) evaluate post-operative quality of life, 2) compare clinical outcomes between minimally invasive and conventional open surgery, 3) evaluate the role of oxidative stress in the carcinogenesis of non-small cell lung cancer (NSCLC), and 4) identify and characterise targeted agents for therapeutic and diagnostic use in surgery. For Study I, pneumonectomy patients replied to the 15D quality-of-life and baseline dyspnea questionnaires. Study III involved a prospective quality-of-life assessment using the 15D questionnaire after lobectomy or bi-lobectomy. Study IV was a retrospective comparison of clinical outcomes between 212 patients treated with open thoracotomy and 116 patients who underwent a minimally invasive technique. Study II measured parameters of oxidative metabolism (myeloperoxidase activity, glutathione content and NADPH oxidase activity) and DNA adducts. Study V employed the phage display method and identified a core motif for homing peptides; this method served in cell-binding, cell-localisation, and biodistribution studies. Following both pneumonectomy and lobectomy, NSCLC patients showed significantly decreased long-term quality of life. No significant correlation was noted between post-operative quality of life and pre-operative pulmonary function tests. Women suffered more from increased dyspnea after pneumonectomy, an effect absent after lobectomy or bi-lobectomy. Patients treated with video-assisted thoracoscopy (VATS) showed significantly decreased morbidity and shorter periods of hospitalisation than open-surgery patients. This improvement was achieved even though the VATS patients were older, suffered more comorbid conditions, and had poorer pulmonary function.
No significant differences in survival were noted between these two groups. An increase in NADPH oxidase activity was noted in tumour samples of both adenocarcinoma and squamous cell carcinoma. This increase was independent from myeloperoxidase activity. Elevated glutathione content was noted in tumour tissue, especially in adenocarcinoma. After panning the clinical tumour samples with the phage display method, an amino acid sequence of ARRPKLD, the Thx, was chosen for further analysis. This method proved selective of tumour tissue in both in vitro and in vivo cell-binding assay, and biodistribution showed tumour accumulation. Because of the significantly reduced quality of life following pneumonectomy, other operative strategies should be implemented as an alternative (e.g. sleeve-lobectomy). To treat this disease, implementation of a minimally invasive surgical technique is safe, and the results showed decreased morbidity and a shorter period of hospitalisation than with thoracotomy. This technique may facilitate operative treatment of elderly patients with comorbid conditions who might otherwise be considered inoperable. Simultaneous exposure to oxidative stress and altered redox states indicates the important role of oxidative stress in the pathogenesis and malignant transformation of NSCLC. The studies showed with great specificity and with favourable biodistribution that Thx peptide is specific to NSCLC tumours. Thx thus shows promise in imaging, targeted therapy, and monitoring of treatment response.