21 results for cost-effective design

in Helda - Digital Repository of the University of Helsinki


Relevance:

100.00%

Publisher:

Summary:

NMR spectroscopy enables the study of biomolecules from peptides and carbohydrates to proteins at atomic resolution. The technique uniquely allows for structure determination of molecules in the solution state. It also gives insights into dynamics and intermolecular interactions important for determining biological function. Detailed molecular information is entangled in the nuclear spin states. The information can be extracted by pulse sequences designed to measure the desired molecular parameters. Advancement of pulse sequence methodology therefore plays a key role in the development of biomolecular NMR spectroscopy. A range of novel pulse sequences for solution-state NMR spectroscopy is presented in this thesis. The pulse sequences are described in relation to the molecular information they provide. The pulse sequence experiments represent several advances in NMR spectroscopy, with particular emphasis on applications for proteins. Some of the novel methods focus on methyl-containing amino acids, which are pivotal for structure determination. Methyl-specific assignment schemes are introduced for increasing the size range of 13C,15N-labeled proteins amenable to structure determination without resorting to more elaborate labeling schemes. Furthermore, cost-effective means are presented for monitoring amide and methyl correlations simultaneously. Residual dipolar couplings can be applied for structure refinement as well as for studying dynamics. Accurate methods for measuring residual dipolar couplings in small proteins are devised, along with special techniques applicable when proteins require high-pH or high-temperature solvent conditions. Finally, a new technique is demonstrated to diminish strong-coupling induced artifacts in HMBC, a routine experiment for establishing long-range correlations in unlabeled molecules. The presented experiments facilitate structural studies of biomolecules by NMR spectroscopy.

Relevance:

100.00%

Publisher:

Summary:

Ongoing habitat loss and fragmentation threaten much of the biodiversity that we know today. As such, conservation efforts are required if we want to protect biodiversity. Conservation budgets are typically tight, making the cost-effective selection of protected areas difficult. Therefore, reserve design methods have been developed to identify sets of sites that together represent the species of conservation interest in a cost-effective manner. To be able to select reserve networks, data on species distributions are needed. Such data are often incomplete, but species habitat distribution models (SHDMs) can be used to link the occurrence of the species at the surveyed sites to the environmental conditions at these locations (e.g. climatic, vegetation and soil conditions). The probability of the species occurring at unvisited locations is then predicted by the model, based on the environmental conditions of those sites. The spatial configuration of reserve networks is important, because habitat loss around reserves can influence the persistence of species inside the network. Since species differ in their requirements for network configuration, the spatial cohesion of networks needs to be species-specific. A way to account for species-specific requirements is to use spatial variables in SHDMs. Spatial SHDMs allow the evaluation of the effect of reserve network configuration on the probability of occurrence of the species inside the network. Even though reserves are important for conservation, they are not the only option available to conservation planners. To enhance or maintain habitat quality, restoration or maintenance measures are sometimes required. As a result, the number of conservation options per site increases. Currently available reserve selection tools, however, do not offer the ability to handle multiple, alternative options per site.
This thesis extends the existing methodology for reserve design by offering methods to identify cost-effective conservation planning solutions when multiple, alternative conservation options are available per site. Although restoration and maintenance measures are beneficial to certain species, they can be harmful to other species with different requirements. This introduces trade-offs between species when identifying which conservation action is best applied to which site. The thesis describes how the strength of such trade-offs can be identified, which is useful for assessing the consequences of conservation decisions regarding species priorities and budget. Furthermore, the results of the thesis indicate that spatial SHDMs can be successfully used to account for species-specific requirements for spatial cohesion - in the reserve selection (single-option) context as well as in the multi-option context. Accounting for the spatial requirements of multiple species while allowing for several conservation options is, however, complicated, due to trade-offs in species requirements. It is also shown that spatial SHDMs can be successfully used for gaining information on the factors that drive a species' spatial distribution. Such information is valuable to conservation planning, as better knowledge of species requirements facilitates the design of networks for species persistence. The methods and results described in this thesis aim to improve species' probabilities of persistence by taking better account of species' habitat and spatial requirements. Many real-world conservation planning problems are characterised by a variety of conservation options related to protection, restoration and maintenance of habitat. Planning tools therefore need to be able to incorporate multiple conservation options per site, in order to continue the search for cost-effective conservation planning solutions. Simultaneously, the spatial requirements of species need to be considered.
The methods described in this thesis offer a starting point for combining these two relevant aspects of conservation planning.
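The multi-option site-selection problem described above can be illustrated with a small greedy sketch, a common heuristic for such coverage problems. The thesis does not specify this algorithm, and all site, action and species names below are hypothetical:

```python
def select_actions(sites, species, budget):
    """Greedy sketch: repeatedly pick the affordable (site, action) pair
    covering the most still-unrepresented species per unit cost, choosing
    at most one action per site, until all species are covered or no
    affordable pair adds coverage."""
    chosen = {}        # site -> selected action
    covered = set()    # species already represented
    spent = 0.0
    while covered != set(species):
        best, best_ratio = None, 0.0
        for site, actions in sites.items():
            if site in chosen:
                continue                      # alternative options: one per site
            for action, (cost, benefits) in actions.items():
                gain = len(set(benefits) - covered)
                if gain and spent + cost <= budget and gain / cost > best_ratio:
                    best, best_ratio = (site, action, cost, benefits), gain / cost
        if best is None:
            break                             # nothing affordable adds coverage
        site, action, cost, benefits = best
        chosen[site] = action
        covered |= set(benefits)
        spent += cost
    return chosen, covered, spent

# Hypothetical sites, each offering alternative conservation options
# as (cost, species benefiting from that option):
sites = {
    "A": {"protect": (3.0, ["lark"]), "restore": (5.0, ["lark", "newt"])},
    "B": {"protect": (2.0, ["orchid"])},
}
plan, covered, spent = select_actions(sites, ["lark", "newt", "orchid"], budget=10.0)
```

A real planner would replace the greedy rule with exact optimization and add the spatial-cohesion terms discussed above; the sketch only shows why alternative options per site change the search space.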

Relevance:

90.00%

Publisher:

Summary:

The ability to deliver a drug to the patient in a safe, efficacious and cost-effective manner depends largely on the physicochemical properties of the active pharmaceutical ingredient (API) in the solid state. In this context, crystallization is of critical importance in the pharmaceutical industry, as it defines the physical and powder properties of crystalline APIs. An improved knowledge of the various aspects of the crystallization process is therefore needed. The overall goal of this thesis was to gain a better understanding of the relationships between crystallization, solid-state form and properties of pharmaceutical solids, with a focus on a crystal engineering approach to designing technological properties of APIs. Specifically, the solid-state properties of the crystalline forms of the model APIs, erythromycin A and baclofen, and the influence of solvent on their crystallization behavior were investigated. In addition, the physical phenomena associated with wet granulation and hot-melt processing of the model APIs were examined at the molecular level. Finally, the effect of crystal habit modification of a model API on its tabletting properties was evaluated. The thesis improved understanding of the relationships between the crystalline forms of the model APIs, which is of practical importance for solid-state control during processing and storage. Moreover, a new crystalline form, baclofen monohydrate, was discovered and characterized. Upon polymorph screening, erythromycin A demonstrated a high solvate-forming propensity, emphasizing the need for careful control of solvent effects during formulation. The solvent compositions that yield the desirable crystalline form of erythromycin A were defined. Furthermore, new examples of solvent-mediated phase transformations taking place during wet granulation of baclofen and hot-melt processing of erythromycin A dihydrate with PEG 6000 are reported.
Since solvent-mediated phase transformations involve the crystallization of a stable phase, and hence affect the dissolution kinetics and possibly the absorption of the API, these transformations must be well documented. Finally, a controlled-crystallization method utilizing HPMC as a crystal habit modifier was developed for erythromycin A dihydrate. The crystals with modified habit were shown to possess improved compaction properties compared with those of unmodified crystals. This result supports the idea of morphological crystal engineering as a tool for designing technological properties of APIs and is of utmost practical interest.

Relevance:

90.00%

Publisher:

Summary:

Bioremediation, which is the exploitation of the intrinsic ability of environmental microbes to degrade and remove harmful compounds from nature, is considered to be an environmentally sustainable and cost-effective means for environmental clean-up. However, a comprehensive understanding of the biodegradation potential of microbial communities and their response to decontamination measures is required for the effective management of bioremediation processes. In this thesis, the potential to use hydrocarbon-degradative genes as indicators of aerobic hydrocarbon biodegradation was investigated. Small-scale functional gene macro- and microarrays targeting aliphatic, monoaromatic and low molecular weight polyaromatic hydrocarbon biodegradation were developed in order to simultaneously monitor the biodegradation of mixtures of hydrocarbons. The validity of the array analysis in monitoring hydrocarbon biodegradation was evaluated in microcosm studies and field-scale bioremediation processes by comparing the hybridization signal intensities to hydrocarbon mineralization, real-time polymerase chain reaction (PCR), dot blot hybridization and both chemical and microbiological monitoring data. The results obtained by real-time PCR, dot blot hybridization and gene array analysis were in good agreement with hydrocarbon biodegradation in laboratory-scale microcosms. Mineralization of several hydrocarbons could be monitored simultaneously using gene array analysis. In the field-scale bioremediation processes, the detection and enumeration of hydrocarbon-degradative genes provided important additional information for process optimization and design. In creosote-contaminated groundwater, gene array analysis demonstrated that the aerobic biodegradation potential that was present at the site, but restrained under the oxygen-limited conditions, could be successfully stimulated with aeration and nutrient infiltration. 
During ex situ bioremediation of diesel oil- and lubrication oil-contaminated soil, the functional gene array analysis revealed inefficient hydrocarbon biodegradation, caused by poor aeration during composting. The functional gene array specifically detected upper and lower biodegradation pathways required for complete mineralization of hydrocarbons. Bacteria representing 1 % of the microbial community could be detected without prior PCR amplification. Molecular biological monitoring methods based on functional genes provide powerful tools for the development of more efficient remediation processes. The parallel detection of several functional genes using functional gene array analysis is an especially promising tool for monitoring the biodegradation of mixtures of hydrocarbons.

Relevance:

80.00%

Publisher:

Summary:

Aptitude-based student selection: A study concerning the admission processes of some technically oriented healthcare degree programmes in Finland (Orthotics and Prosthetics, Dental Technology and Optometry). The data studied consisted of convenience samples of preadmission information and the results of the admission processes of three technically oriented healthcare degree programmes (Orthotics and Prosthetics, Dental Technology and Optometry) in Finland during the years 1977-1986 and 2003. The number of the subjects tested and interviewed in the first samples was 191, 615 and 606, and in the second 67, 64 and 89, respectively. The questions of the six studies were: I. How were different kinds of preadmission data related to each other? II. Which were the major determinants of the admission decisions? III. Did the graduated students and those who dropped out differ from each other? IV. Was it possible to predict how well students would perform in the programmes? V. How was the student selection executed in the year 2003? VI. Should clinical vs. statistical prediction or both be used? (Some remarks are presented on Meehl's argument: "Always, we might as well face it, the shadow of the statistician hovers in the background; always the actuary will have the final word.") The main results of the study were as follows: Ability tests, dexterity tests and judgements of personality traits (communication skills, initiative, stress tolerance and motivation) provided unique, non-redundant information about the applicants. Available demographic variables did not bias the judgements of personality traits. In all three programme settings, four-factor solutions (personality, reasoning, gender-technical and age-vocational with factor scores) could be extracted by the Maximum Likelihood method with graphical Varimax rotation. The personality factor dominated the final aptitude judgements and very strongly affected the selection decisions.
There were no clear differences between graduated students and those who had dropped out in regard to the four factors. In addition, the factor scores did not predict how well the students performed in the programmes. Meehl's argument on the uncertainty of clinical prediction was supported by the results, which on the other hand did not provide any relevant data for rules on statistical prediction. No clear arguments for or against aptitude-based student selection were presented. However, the structure of the aptitude measures and their impact on the admission process are now better known. The concept of "personal aptitude" is not necessarily included in the values and preferences of those in charge of organizing the schooling. Thus, the most well-founded and cost-effective way to execute student selection is apparently to rely on e.g. the grade point averages of the matriculation examination and/or written entrance exams. This procedure, according to the present study, would result in a student group with a quite different makeup (60%) from the group selected on the basis of aptitude tests. For the recruiting organizations, instead, "personal aptitude" may be a matter of great importance. The employers, of course, decide on personnel selection. The psychologists, if consulted, are responsible for the proper use of psychological measures.

Relevance:

80.00%

Publisher:

Summary:

Pharmacogenetics deals with genetically determined variation in drug response. In this context, three phase I drug-metabolizing enzymes, CYP2D6, CYP2C9, and CYP2C19, have a central role, affecting the metabolism of about 20-30% of clinically used drugs. Since genes coding for these enzymes in human populations exhibit high genetic polymorphism, they are of major pharmacogenetic importance. The aims of this study were to develop new genotyping methods for CYP2D6, CYP2C9, and CYP2C19 that would cover the most important genetic variants altering the enzyme activity, and, for the first time, to describe the distribution of genetic variation at these loci on global and microgeographic scales. In addition, pharmacogenetics was applied to a postmortem forensic setting to elucidate the role of genetic variation in drug intoxications, focusing mainly on cases related to tricyclic antidepressants, which are commonly involved in fatal drug poisonings in Finland. Genetic variability data were obtained by genotyping new population samples by the methods developed based on PCR and multiplex single-nucleotide primer extension reaction, as well as by collecting data from the literature. Data consisted of 138, 129, and 146 population samples for CYP2D6, CYP2C9, and CYP2C19, respectively. In addition, over 200 postmortem forensic cases were examined with respect to drug and metabolite concentrations and genotypic variation at CYP2D6 and CYP2C19. The distribution of genetic variation within and among human populations was analyzed by descriptive statistics and variance analysis and by correlating the genetic and geographic distances using Mantel tests and spatial autocorrelation. The correlation between phenotypic and genotypic variation in drug metabolism observed in postmortem cases was also analyzed statistically. The genotyping methods developed proved to be informative, technically feasible, and cost-effective. 
Detailed molecular analysis of CYP2D6 genetic variation in a global survey of human populations revealed that the pattern of variation was similar to those of neutral genomic markers. Most of the CYP2D6 diversity was observed within populations, and the spatial pattern of variation was best described as clinal. On the other hand, genetic variants of CYP2D6, CYP2C9, and CYP2C19 associated with altered enzymatic activity could reach extremely high frequencies in certain geographic regions. Pharmacogenetic variation may also be significantly affected by population-specific demographic histories, as seen within the Finnish population. When pharmacogenetics was applied to a postmortem forensic setting, a correlation between amitriptyline metabolic ratios and genetic variation at CYP2D6 and CYP2C19 was observed in the sample material, even in the presence of confounding factors typical for these cases. In addition, a case of doxepin-related fatal poisoning was shown to be associated with a genetic defect at CYP2D6. Each of the genes studied showed a distinct variation pattern in human populations and high frequencies of altered activity variants, which may reflect the neutral evolution and/or selective pressures caused by dietary or environmental exposure. The results are relevant also from the clinical point of view since the genetic variation at CYP2D6, CYP2C9, and CYP2C19 already has a range of clinical applications, e.g. in cancer treatment and oral anticoagulation therapy. This study revealed that pharmacogenetics may also contribute valuable information to the medicolegal investigation of sudden, unexpected deaths.
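The Mantel test mentioned above, used to correlate genetic and geographic distance matrices, is in essence a permutation procedure. A minimal NumPy sketch (an illustrative implementation, not the study's own code; the example matrices are hypothetical):

```python
import numpy as np

def mantel(dist_a, dist_b, permutations=999, rng=None):
    """Simple Mantel test: Pearson correlation between the upper
    triangles of two symmetric distance matrices, with a permutation
    p-value obtained by shuffling the rows and columns of one matrix."""
    rng = np.random.default_rng(rng)
    n = dist_a.shape[0]
    iu = np.triu_indices(n, k=1)              # upper-triangle index pairs
    r_obs = np.corrcoef(dist_a[iu], dist_b[iu])[0, 1]
    hits = 0
    for _ in range(permutations):
        perm = rng.permutation(n)             # relabel the objects of dist_b
        r = np.corrcoef(dist_a[iu], dist_b[perm][:, perm][iu])[0, 1]
        if abs(r) >= abs(r_obs):
            hits += 1
    return r_obs, (hits + 1) / (permutations + 1)

# Hypothetical example: five populations along a transect; the "genetic"
# distances are made exactly proportional to the geographic ones.
pts = np.array([0.0, 1.0, 3.0, 7.0, 12.0])
d_geo = np.abs(pts[:, None] - pts[None, :])
d_gen = 2.0 * d_geo
r, p = mantel(d_geo, d_gen, permutations=99, rng=0)
```

Permuting whole rows and columns together (rather than individual entries) preserves the dependence structure within each matrix, which is what makes the p-value valid for distance data.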

Relevance:

80.00%

Publisher:

Summary:

Type 2 diabetes is an increasing, serious, and costly public health problem. The increase in the prevalence of the disease can mainly be attributed to changing lifestyles leading to physical inactivity, overweight, and obesity. These lifestyle-related risk factors also offer a possibility for preventive interventions. Until recently, proper evidence regarding the prevention of type 2 diabetes has been virtually missing. To be cost-effective, intensive interventions to prevent type 2 diabetes should be directed to people at an increased risk of the disease. The aim of this series of studies was to investigate whether type 2 diabetes can be prevented by lifestyle intervention in high-risk individuals, and to develop a practical method to identify individuals who are at high risk of type 2 diabetes and would benefit from such an intervention. To study the effect of lifestyle intervention on diabetes risk, we recruited 522 volunteer, middle-aged (aged 40-64 at baseline), overweight (body mass index > 25 kg/m2) men (n = 172) and women (n = 350) with impaired glucose tolerance to the Diabetes Prevention Study (DPS). The participants were randomly allocated either to the intensive lifestyle intervention group or the control group. The control group received general dietary and exercise advice at baseline and had an annual physician's examination. The participants in the intervention group received, in addition, individualised dietary counselling from a nutritionist. They were also offered circuit-type resistance training sessions and were advised to increase overall physical activity. The intervention goals were to reduce body weight (5% or more reduction from baseline weight), limit dietary fat (< 30% of total energy consumed) and saturated fat (< 10% of total energy consumed), and to increase dietary fibre intake (15 g / 1000 kcal or more) and physical activity (≥ 30 minutes/day). Diabetes status was assessed annually by a repeated 75 g oral glucose tolerance test.
The first analysis of end-points was completed after a mean follow-up of 3.2 years, and the intervention phase was terminated after a mean duration of 3.9 years. After that, the study participants continued to visit the study clinics for the annual examinations, for a mean of 3 years. The intervention group showed significantly greater improvement in each intervention goal. After 1 and 3 years, mean weight reductions were 4.5 and 3.5 kg in the intervention group and 1.0 kg and 0.9 kg in the control group. Cardiovascular risk factors improved more in the intervention group. After a mean follow-up of 3.2 years, the risk of diabetes was reduced by 58% in the intervention group compared with the control group. The reduction in the incidence of diabetes was directly associated with achieved lifestyle goals. Furthermore, those who consumed a moderate-fat, high-fibre diet achieved the largest weight reduction and, even after adjustment for weight reduction, the lowest diabetes risk during the intervention period. After discontinuation of the counselling, the differences in lifestyle variables between the groups still remained favourable for the intervention group. During the post-intervention follow-up period of 3 years, the risk of diabetes was still 36% lower among the former intervention group participants compared with the former control group participants. To develop a simple screening tool to identify individuals at high risk of type 2 diabetes, follow-up data from two population-based cohorts of 35-64-year-old men and women were used. The National FINRISK Study 1987 cohort (model development data) included 4435 subjects, with 182 new drug-treated cases of diabetes identified during ten years, and the FINRISK Study 1992 cohort (model validation data) included 4615 subjects, with 67 new cases of drug-treated diabetes during five years, ascertained using the Social Insurance Institution's Drug register.
Baseline age, body mass index, waist circumference, history of antihypertensive drug treatment and high blood glucose, physical activity and daily consumption of fruits, berries or vegetables were selected into the risk score as categorical variables. In the 1987 cohort the optimal cut-off point of the risk score identified 78% of those who got diabetes during the follow-up (= sensitivity of the test) and 77% of those who remained free of diabetes (= specificity of the test). In the 1992 cohort the risk score performed equally well. The final Finnish Diabetes Risk Score (FINDRISC) form includes, in addition to the predictors of the model, a question about family history of diabetes and the age category of over 64 years. When applied to the DPS population, the baseline FINDRISC value was associated with diabetes risk among the control group participants only, indicating that the intensive lifestyle intervention given to the intervention group participants abolished the diabetes risk associated with baseline risk factors. In conclusion, the intensive lifestyle intervention produced long-term beneficial changes in diet, physical activity, body weight, and cardiovascular risk factors, and reduced diabetes risk. Furthermore, the effects of the intervention were sustained after the intervention was discontinued. The FINDRISC proved to be a simple, fast, inexpensive, non-invasive, and reliable tool to identify individuals at high risk of type 2 diabetes. The use of FINDRISC to identify high-risk subjects, followed by lifestyle intervention, provides a feasible scheme in preventing type 2 diabetes, which could be implemented in the primary health care system.
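The sensitivity and specificity figures quoted for the risk-score cut-off follow from a simple cross-tabulation of the dichotomized score against observed outcomes. A minimal sketch, with hypothetical scores and outcomes (not FINDRISC data):

```python
def sens_spec(scores, outcomes, cutoff):
    """Sensitivity and specificity of dichotomizing a risk score at
    `cutoff`; outcomes: 1 = developed diabetes, 0 = stayed diabetes-free."""
    tp = sum(1 for s, y in zip(scores, outcomes) if s >= cutoff and y == 1)
    fn = sum(1 for s, y in zip(scores, outcomes) if s < cutoff and y == 1)
    tn = sum(1 for s, y in zip(scores, outcomes) if s < cutoff and y == 0)
    fp = sum(1 for s, y in zip(scores, outcomes) if s >= cutoff and y == 0)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical risk scores and follow-up outcomes for eight subjects:
scores = [3, 12, 11, 14, 5, 9, 8, 15]
outcomes = [0, 0, 1, 1, 0, 1, 0, 1]
sens, spec = sens_spec(scores, outcomes, cutoff=11)
```

Scanning such a calculation over all candidate cut-offs is how an "optimal cut-off point", like the one reported for the 1987 cohort, is chosen.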

Relevance:

80.00%

Publisher:

Summary:

Cost-effective mitigation of climate change is essential for both climate and environmental policy. Forest rotation age is one of the silvicultural measures by which forest carbon stocks can be influenced, in accordance with Article 3.4 of the Kyoto Protocol. The purpose of this study is to evaluate how forest rotation age affects carbon sequestration and the profitability of forestry. The relation between the forest rotation period that optimizes forest owners’ discounted net returns over time and rotations which are 10, 20 and 30 years longer than the optimal rotation is examined. In addition, the cost of lengthening the rotation period is studied, as well as whether carbon sequestration revenues can improve the profitability of forestry. The data used in the study consist of 16 stands located in Southern Finland. The main tree species in these stands were Norway spruce and Scots pine. The forest simulation tool MOTTI was used in the analysis. The results indicate that lengthening the rotation period increases forest carbon stocks. However, as the rotation period is lengthened by more than 10 years, the rate of carbon sequestration slows down as a result of the diminishing growth curve. The average discounted cost of carbon sequestration varied between 2.4 and 14.1 €/tCO2. Carbon sequestration rates in spruce stands were higher, and the costs lower, than those obtained from pine stands. The absence of carbon trading schemes is an obstacle to the commercialization of forest carbon sinks. In the future, research should concentrate on analysing what kind of operational models of carbon trading could be feasible in Finland.
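The average discounted cost of carbon sequestration reported above is, in essence, discounted forgone harvest revenue divided by discounted additional CO2 stored. A minimal sketch with hypothetical cash flows and sequestration amounts; the 3% discount rate is an assumption, not a figure from the study:

```python
def discounted_cost_per_tonne(revenue_loss_by_year, extra_co2_by_year, rate=0.03):
    """Average discounted cost of carbon sequestration (EUR per tonne of
    CO2) from lengthening a rotation: discounted forgone net revenue
    divided by discounted additional CO2 stored. Inputs map years from
    the decision point to amounts; all figures here are hypothetical."""
    npv_cost = sum(v / (1.0 + rate) ** t for t, v in revenue_loss_by_year.items())
    npv_co2 = sum(v / (1.0 + rate) ** t for t, v in extra_co2_by_year.items())
    return npv_cost / npv_co2

# Hypothetical stand: 1200 EUR of net harvest revenue forgone in year 10,
# with 40 t and 60 t of extra CO2 stored by years 5 and 10.
cost = discounted_cost_per_tonne({10: 1200.0}, {5: 40.0, 10: 60.0})
```

Because both numerator and denominator are discounted, delaying either the cost or the sequestration shifts the resulting €/tCO2 figure, which is why rotation length matters for cost-effectiveness.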

Relevance:

80.00%

Publisher:

Summary:

Climate change is the single biggest environmental problem in the world at the moment. Although the effects are still not fully understood and there is a considerable amount of uncertainty, many nations have decided to mitigate the change. On the societal level, a planner who tries to find an economically optimal solution to an environmental pollution problem seeks to reduce pollution from the sources where reductions are most cost-effective. This study aims to find out how effective the instruments of agricultural policy are in the case of climate change mitigation in Finland. The theoretical base of this study is the neoclassical economic theory that rests on the assumption of a rational economic agent who maximizes his own utility. This base has been widened in the direction clearly essential to the matter: the theory of environmental economics. Deeply relevant to this problem, and central in environmental economics, are the concepts of externalities and public goods, as well as the problems of global pollution and non-point-source pollution. Econometric modelling was the method applied in this study. The Finnish part of the AGMEMOD model, which covers the whole EU, was used to estimate the development of pollution. This model is a seemingly recursive, partially dynamic partial-equilibrium model that was constructed to predict the development of Finnish agricultural production of the most important products. For the study, I personally updated the model and also widened its scope in some relevant matters. I also devised a table that calculates the emissions of greenhouse gases according to the rules set by the IPCC. With the model I investigated five alternative scenarios in comparison to the base-line scenario of Agenda 2000 agricultural policy.
The alternative scenarios were: 1) the CAP reform of 2003, 2) free trade in agricultural commodities, 3) technological change, 4) banning the cultivation of organic soils and 5) the combination of the last three scenarios as the maximal achievement in reduction. The maximal achievement in alternative scenario 5 was 1/3 of the level achieved in the base-line scenario. The CAP reform caused only a minor reduction when compared to the base-line scenario. Instead, the free trade scenario and the scenario of technological change each alone caused a significant reduction. The biggest single reduction was achieved by banning the cultivation of organic land. However, this was also the most questionable scenario to be realized; the reasons for this are further elaborated in the paper. The maximal reduction that can be achieved in the Finnish agricultural sector is about 11 % of the emission reduction needed to comply with the Kyoto protocol.
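The greenhouse-gas accounting mentioned above follows the generic IPCC inventory arithmetic: activity data times an emission factor, converted to CO2 equivalents with global warming potentials. A sketch with illustrative, non-official emission factors; the GWP values 21 (CH4) and 310 (N2O) are the 100-year values used for Kyoto-period inventories:

```python
# 100-year global warming potentials used for Kyoto-period inventories.
GWP = {"CO2": 1, "CH4": 21, "N2O": 310}

def co2_equivalents(activities, factors):
    """IPCC-style inventory arithmetic: for each source, multiply the
    activity amount by an emission factor (tonnes of gas per unit of
    activity) and convert to tonnes of CO2 equivalent via the GWP."""
    total = 0.0
    for source, amount in activities.items():
        gas, ef = factors[source]
        total += amount * ef * GWP[gas]
    return total

# Illustrative (non-official) factors: enteric CH4 per dairy cow and CO2
# per hectare of cultivated organic soil, both in tonnes per year.
total = co2_equivalents(
    {"dairy_cows": 300_000, "organic_soils_ha": 50_000},
    {"dairy_cows": ("CH4", 0.1), "organic_soils_ha": ("CO2", 20.0)},
)
```

Scenario comparisons such as those above amount to re-running this arithmetic with the activity levels each policy scenario implies.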

Relevance:

80.00%

Publisher:

Summary:

Staphylococcus aureus is one of the most important bacteria causing disease in humans, and methicillin-resistant S. aureus (MRSA) has become the most commonly identified antibiotic-resistant pathogen in many parts of the world. MRSA rates were stable for many years in the Nordic countries and the Netherlands, which have a low MRSA prevalence within Europe, but in recent decades MRSA rates have increased in those low-prevalence countries as well. MRSA has been established as a major hospital pathogen, but has also been found increasingly in long-term facilities (LTF) and in communities of persons with no connections to the health-care setting. In Finland, the annual number of MRSA isolates reported to the National Infectious Disease Register (NIDR) has constantly increased, especially outside the Helsinki metropolitan area. Molecular typing has revealed numerous outbreak strains of MRSA, some of which have previously been associated with community acquisition. In this work, data on MRSA cases notified to the NIDR and on MRSA strain types identified with pulsed-field gel electrophoresis (PFGE), multilocus sequence typing (MLST), and staphylococcal cassette chromosome mec (SCCmec) typing at the National Reference Laboratory (NRL) in Finland from 1997 to 2004 were analyzed. An increasing trend in MRSA incidence in Finland from 1997 to 2004 was shown. In addition, non-multi-drug-resistant (NMDR) MRSA isolates, especially those resistant only to methicillin/oxacillin, showed an emerging trend. The predominant MRSA strains changed over time and place, but two internationally spread epidemic strains of MRSA, FIN-16 and FIN-21, were related to the increase detected most recently. Those strains were also one cause of the strikingly increasing invasive MRSA findings. A rise of MRSA strains with SCCmec types IV or V, possibly community-acquired MRSA, was also detected.
With questionnaires, the diagnostic methods used for MRSA identification in Finnish microbiology laboratories and the number of MRSA screening specimens studied were reviewed. Surveys focusing on the MRSA situation in long-term facilities in 2001 and on the background information of MRSA-positive persons in 2001-2003 were also carried out. The rates of MRSA and screening practices varied widely across geographic regions. Some of the NMDR MRSA strains could remain undetected in laboratories because of insufficient diagnostic techniques. The increasing proportion of the elderly population carrying MRSA suggests that MRSA is an emerging problem in Finnish long-term facilities. Among the patients, 50% of the specimens were taken on a clinical basis, 43% on a screening basis after exposure to MRSA, 3% on a screening basis because of hospital contact abroad, and 4% for other reasons. In response to an outbreak of MRSA possessing a new genotype, which occurred in a health care ward and in an associated nursing home of a small municipality in Northern Finland in autumn 2003, a point-prevalence survey was performed six months later. In the same study, the molecular epidemiology of MRSA and methicillin-sensitive S. aureus (MSSA) strains was also assessed, the results were compared to the national strain collection, and the difficulties of MRSA screening with low-level oxacillin-resistant isolates were documented. The original MRSA outbreak in the LTF, which consisted of isolates possessing a nationally new PFGE profile (FIN-22) and an internationally rare MLST type (ST-27), was contained. Another previously unrecognized MRSA strain was found with additional screening, possibly indicating that current routine MRSA screening methods may be insufficiently sensitive for strains possessing low-level oxacillin resistance.
Most of the MSSA strains found were genotypically related to the epidemic MRSA strains, but only a few of them had acquired the SCCmec element, and all of those possessed the new SCCmec type V. In the second-largest nursing home in Finland, colonization by S. aureus and MRSA was studied, along with the role of screening sites and broth enrichment culture in the sensitivity of S. aureus detection. Combining enrichment broth and perineal swabbing with swabbing of the nostrils and skin lesions may be an alternative to throat swabs in the nursing-home setting, especially when residents are uncooperative. Finally, to evaluate the phenotypic and genotypic methods needed for reliable laboratory diagnostics of MRSA, oxacillin disk diffusion and MIC tests were compared with the cefoxitin disk diffusion method at both +35°C and +30°C, with and without sodium chloride (NaCl) added to the Mueller-Hinton test medium, and in-house PCR was compared with two commercial molecular methods (the GenoType® MRSA test and the EVIGENE™ MRSA Detection test) using S. aureus and other bacterial species. The cefoxitin disk diffusion method was superior to oxacillin disk diffusion and to the MIC tests in predicting mecA-mediated resistance in S. aureus when incubated at +35°C, with or without NaCl in the test medium. Both the GenoType® MRSA and the EVIGENE™ MRSA Detection tests are usable, accurate, cost-effective, and sufficiently fast for rapid MRSA confirmation from a pure culture.
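The diagnostic comparisons above reduce to confusion-matrix arithmetic against mecA PCR as the reference standard. A minimal sketch of that calculation; the isolate counts below are invented for illustration and are not the study's data:

```python
# Hedged sketch: scoring a phenotypic MRSA test (e.g. cefoxitin disk diffusion)
# against mecA PCR as the reference method. All counts are hypothetical.

def sensitivity_specificity(tp, fn, tn, fp):
    """Return (sensitivity, specificity) from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)   # fraction of mecA-positive isolates detected
    specificity = tn / (tn + fp)   # fraction of mecA-negative isolates called negative
    return sensitivity, specificity

# Hypothetical panel: 120 mecA-positive and 80 mecA-negative isolates.
sens, spec = sensitivity_specificity(tp=118, fn=2, tn=79, fp=1)
print(f"sensitivity {sens:.1%}, specificity {spec:.1%}")
```

The same two numbers underlie every method comparison in the abstract; a test "superior in predicting mecA-mediated resistance" is one whose sensitivity approaches 100% without losing specificity.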

Resumo:

Anadromous whitefish is one of the most important species in the Finnish coastal fisheries in the Gulf of Bothnia. To compensate for the reproduction lost to river damming and to support the fisheries, several million one-summer-old whitefish are released yearly into the Gulf of Bothnia. Since whitefish also reproduce naturally in the Gulf, and wild and stocked fish cannot be separated in the catch, the stocking impact can only be estimated by marking the stocked fish. Because of the small size and large number of released whitefish, the scattered fishery, and the large area over which the whitefish migrate, most traditional fish-marking methods were either unsuitable (e.g. Carlin tags) or proved too expensive (e.g. coded wire tags). Fluorescent pigment spraying offers a fast and cost-effective way to mass-mark young fish. However, the results are not always satisfactory, owing to low long-term retention of the marks in some species, so the method has to be tested and proper marking conditions determined for each species. This thesis is based on work carried out while developing the fluorescent pigment spraying method for marking one-summer-old whitefish fingerlings, and it draws together the results of mass-marking whitefish fingerlings released in the Gulf of Bothnia. The method is suitable for one-summer-old whitefish longer than 8 cm in total length. The water temperature during marking should not exceed 10 °C. A suitable spraying pressure is 6 bar, measured at the compressor outlet, and the spraying-gun nozzle should be held ca. 20 cm from the fish. Under such conditions, marking results in long-term retention of the mark with little or no mortality.
The stress level of the fish (measured as muscle water content) rises during the marking procedure, but if the fish are allowed to recover after marking, the overall stress level remains within the limits observed in normal fish handling during the capture-loading-transport-stocking procedure. The marked whitefish fingerlings are released into the sea at larger size and later in the season than the wild whitefish. However, the stocked individuals migrate to the southern feeding grounds in a similar pattern to the wild ones. The catch produced by whitefish stocking in the Gulf of Bothnia varied between released fingerling groups, but was within the limits reported elsewhere in Finland. The releases in the southern Bothnian Bay resulted in a larger catch than those made in the northern Bothnian Bay. The size of the released fingerlings seemed to have some effect on survival of the fish during the first winter in the sea. However, when the different marking groups were compared, the mean fingerling size was not related to stocking success.

Resumo:

Ruptured abdominal aortic aneurysm (RAAA) is a life-threatening event, and without operative treatment the patient will die. Overall mortality can be as high as 80-90%; thus, repair of RAAA should be attempted whenever feasible. Quality of life (QoL) has become an increasingly important outcome measure in vascular surgery. The aim of this study was to evaluate the outcomes of RAAA and to identify predictors of mortality. In the Helsinki and Uusimaa district, 626 patients were identified with RAAA in 1996-2004; 352 of them were admitted to Helsinki University Central Hospital (HUCH). Based on the Finnvasc Registry, 836 patients underwent repair of RAAA in 1991-1999. The 30-day operative, hospital, and population-based mortality were assessed, together with the effects of regional centralisation and improved in-hospital care on the outcome of RAAA. QoL among survivors of RAAA was evaluated with the RAND-36 questionnaire. Quality-adjusted life years (QALYs), which combine length and quality of life, were calculated using the EQ-5D index and estimated life expectancy. Predictors of outcome after RAAA were assessed at admission and at 48 hours after repair. The 30-day operative mortality rate was 38% in HUCH and 44% nationwide, and hospital mortality was 45% in HUCH. Population-based mortality was 69% in 1996-2004 and 56% in 2003-2004. After organisational changes were undertaken, mortality decreased significantly at all levels. Among the survivors, QoL was almost equal to the norms of age- and sex-matched controls; only physical functioning was slightly impaired. Successful repair of RAAA yielded a mean of 4.1 (0-30.9) QALYs across all RAAA patients, non-survivors included. The preoperative Glasgow Aneurysm Score was an independent predictor of 30-day operative mortality after RAAA, and it also predicted the outcome at 48 hours for initial survivors of repair.
A high Glasgow Aneurysm Score and high age were associated with a low number of achievable QALYs. Organ dysfunction measured by the Sequential Organ Failure Assessment (SOFA) score at 48 hours after repair of RAAA was the strongest predictor of death. In conclusion, surgery for RAAA is a life-saving and cost-effective procedure. The centralisation of vascular emergencies improved the outcome of RAAA patients, and the survivors had a good QoL. Because of their moderate discriminatory value, predictive models can be used at the individual level only to provide supplementary information for clinical decision-making. These results support an active operation policy, as there is no reliable measure for predicting the outcome after RAAA.
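The Glasgow Aneurysm Score used as a predictor above is a simple additive risk score. A minimal sketch, assuming the commonly cited weighting (age in years, plus 17 for shock, 7 for myocardial disease, 10 for cerebrovascular disease, and 14 for renal disease); the weights are from the general literature rather than this abstract, and the example patient is hypothetical:

```python
# Hedged sketch of the Glasgow Aneurysm Score (GAS), assuming the commonly
# cited additive weights; not taken from this study's text.

def glasgow_aneurysm_score(age, shock, myocardial, cerebrovascular, renal):
    """GAS = age + 17*shock + 7*myocardial + 10*cerebrovascular + 14*renal."""
    return (age + 17 * shock + 7 * myocardial
            + 10 * cerebrovascular + 14 * renal)

# Hypothetical 76-year-old presenting in shock with renal impairment:
score = glasgow_aneurysm_score(age=76, shock=True, myocardial=False,
                               cerebrovascular=False, renal=True)
print(score)  # 76 + 17 + 14 = 107
```

Higher scores correspond to higher predicted operative mortality, which is why the study could also relate a high score to a low number of achievable QALYs.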

Resumo:

Lipid analysis is commonly performed by gas chromatography (GC) under laboratory conditions. Spectroscopic techniques, by contrast, are non-destructive and can be applied noninvasively in vivo. Excess fat (triglycerides) in visceral adipose tissue and liver is known to predispose to metabolic abnormalities, collectively known as the metabolic syndrome. Insulin resistance is the likely cause, with diets high in saturated fat known to impair insulin sensitivity. Tissue triglyceride composition has been used as a marker of dietary intake, but it can also be influenced by tissue-specific handling of fatty acids. Recent studies have shown that adipocyte insulin sensitivity correlates positively with the cells' saturated-fat content, contradicting the common view of dietary effects. A better understanding of the factors affecting tissue triglyceride composition is therefore needed to provide further insight into tissue function in lipid metabolism. In this thesis, two spectroscopic techniques were developed for in vitro and in vivo analysis of tissue triglyceride composition. The in vitro studies (Study I) used infrared spectroscopy (FTIR), a fast and cost-effective analytical technique well suited to multivariate analysis; infrared spectra are, however, characterized by peak overlap, leading to poorly resolved absorbances and limited analytical performance. The in vivo studies (Studies II, III, and IV) used proton magnetic resonance spectroscopy (1H-MRS), an established non-invasive clinical method for measuring metabolites in vivo; 1H-MRS has been limited in its ability to analyze triglyceride composition because of poorly resolved resonances. Using an attenuated total reflection accessory, we were able to obtain pure triglyceride infrared spectra from adipose tissue biopsies, and using multivariate curve resolution (MCR) we were able to resolve the overlapping double-bond absorbances of monounsaturated and polyunsaturated fat.
MCR also resolved the isolated trans double bond and conjugated linoleic acids from an overlapping background absorbance. Using oil phantoms to study the effects of different fatty acid compositions on the echo-time behaviour of triglycerides, we concluded that long echo times improve peak separation, with T2 weighting having a negligible impact. We also found that the echo-time behaviour of the methyl resonance of omega-3 fats differs from that of other fats because of its characteristic J-coupling. This novel insight could be used to detect omega-3 fats in human adipose tissue in vivo at very long echo times (TE = 470 and 540 ms). A comparison of in vivo 1H-MRS of adipose tissue with GC of adipose tissue biopsies in humans showed that long-TE spectra resulted in improved peak fitting and better correlations with GC data. The study also showed that calculating fatty acid fractions from 1H-MRS data is unreliable and should be avoided. Omega-3 fatty acid content derived from long-TE in vivo spectra (TE = 540 ms) correlated with the total omega-3 fatty acid concentration measured by GC. The long-TE protocol used for adipose tissue was subsequently extended to the analysis of liver fat composition. Respiratory triggering and a long TE yielded spectra in which the olefinic and tissue-water resonances were resolved. Converting the derived unsaturation to double-bond content per fatty acid showed that the results were in accordance with previously published gas chromatography data on liver fat composition. In patients with metabolic syndrome, liver fat was found to be more saturated than subcutaneous or visceral adipose tissue; this higher saturation may result from a higher rate of de novo lipogenesis in the liver than in adipose tissue. This thesis has introduced the first non-invasive method for determining adipose tissue omega-3 fatty acid content in humans in vivo.
The methods introduced here have also shown that liver fat is more saturated than adipose tissue fat.
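MCR recovers pure-component spectra without reference spectra; as a much simpler illustration of the linear-mixture model that underlies it, the sketch below unmixes a synthetic spectrum into two *known* reference absorbance profiles by ordinary least squares. This is a deliberate simplification (classical least squares, not MCR), and all numbers are invented:

```python
# Hedged sketch: a spectrum of a mixture is modelled as a linear combination of
# component spectra. With known references, the coefficients follow from least
# squares (2x2 normal equations solved by hand). Synthetic data throughout.

def unmix_two_components(mixed, ref1, ref2):
    """Least-squares coefficients (a, b) such that mixed ≈ a*ref1 + b*ref2."""
    s11 = sum(x * x for x in ref1)
    s22 = sum(y * y for y in ref2)
    s12 = sum(x * y for x, y in zip(ref1, ref2))
    m1 = sum(m * x for m, x in zip(mixed, ref1))
    m2 = sum(m * y for m, y in zip(mixed, ref2))
    det = s11 * s22 - s12 * s12
    a = (m1 * s22 - m2 * s12) / det
    b = (m2 * s11 - m1 * s12) / det
    return a, b

# Synthetic overlapping double-bond profiles and a 0.7/0.3 mixture of them:
mono = [0.1, 0.8, 0.4, 0.1]   # stand-in for a monounsaturated band
poly = [0.0, 0.3, 0.9, 0.5]   # stand-in for a polyunsaturated band
mixed = [0.7 * x + 0.3 * y for x, y in zip(mono, poly)]
a, b = unmix_two_components(mixed, mono, poly)
print(round(a, 3), round(b, 3))  # recovers 0.7 0.3
```

MCR-ALS solves the harder version of this problem, estimating both the component spectra and their contributions from the mixed data alone, which is what makes it useful when pure references are unavailable.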

Resumo:

Severe sepsis is common, costly to treat, and associated with significant mortality. Its reported incidence varies between 0.5/1,000 and 3/1,000 in different studies. The worldwide Surviving Sepsis Campaign, with its guidelines and treatment protocols, aims at decreasing the high morbidity and mortality associated with severe sepsis. Various mediators of inflammation, such as high mobility group box-1 protein (HMGB1) and vascular endothelial growth factor (VEGF), have been tested as markers of severity of illness and outcome in severe sepsis. Long-term survival with quality of life (QOL) assessment is an important outcome measure after severe sepsis. The objectives of this study were to evaluate the incidence, severity of organ dysfunction, and outcome of severe sepsis in intensive-care patients in Finland (Study I); to examine HMGB1 and VEGF as predictors of severity of illness, development and type of organ dysfunction, and hospital mortality (Studies II and III); and to assess long-term outcome and quality of life, with estimation of quality-adjusted life years (QALYs) and cost per QALY (Study IV). A total of 470 patients with severe sepsis were included in the Finnsepsis study. Patients were treated in 24 Finnish intensive care units during a 4-month period from 1 November 2004 to 28 February 2005. The incidence of severe sepsis was 0.38/1,000 in the adult population (95% confidence interval 0.34-0.41). Septic shock (77%), severe oxygenation impairment (71.4%), and acute renal failure (23.2%) were the most common organ failures. ICU, hospital, one-year, and two-year mortality were 15.5%, 28.3%, 40.9%, and 44.9%, respectively. HMGB1 and VEGF were elevated in patients with severe sepsis. VEGF concentrations were lower in non-survivors than in survivors, whereas HMGB1 levels did not differ between the groups; neither HMGB1 nor VEGF predicted hospital mortality. QOL was measured a median of 17 months after severe sepsis and was lower than in the reference population.
The mean QALY was 15.2 years per surviving patient, and the cost per QALY was EUR 2,139. The study showed that the incidence of severe sepsis is lower in Finland than in other countries. Short-term outcome is comparable with that in other countries, but long-term outcome is poor. HMGB1 and VEGF are not useful for predicting mortality in severe sepsis. As the mean QALY per surviving patient is 15.2 and the cost per QALY is reasonably low, intensive care is cost-effective in patients with severe sepsis.
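The headline incidence figure reduces to simple rate arithmetic. In the sketch below, the case count and 4-month window come from the abstract, but the adult-population denominator is an assumption chosen to roughly reproduce the reported rate, and the normal-approximation interval only approximates the study's exact confidence interval:

```python
# Hedged sketch of incidence arithmetic behind "0.38/1,000 (95% CI 0.34-0.41)".
# The denominator below is assumed, not stated in the abstract.
import math

cases = 470                     # severe sepsis cases during the study period
months = 4                      # length of the study period
adult_population = 3_700_000    # assumed Finnish adult population

annual_cases = cases * 12 / months
rate_per_1000 = annual_cases * 1000 / adult_population

# Normal approximation to the Poisson CI on the raw count, then annualised:
half_width = 1.96 * math.sqrt(cases)
lo = (cases - half_width) * 12 / months * 1000 / adult_population
hi = (cases + half_width) * 12 / months * 1000 / adult_population
print(f"{rate_per_1000:.2f}/1,000 (95% CI {lo:.2f}-{hi:.2f})")
```

The cost-effectiveness claim follows the same pattern: dividing total treatment cost by the QALYs gained gives the cost per QALY, and EUR 2,139 per QALY is low by any conventional willingness-to-pay threshold.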

Resumo:

The greatest reduction in breast cancer mortality comes from detecting and treating invasive cancer while it is as small as possible. Although mammography screening is known to be effective, observer errors are frequent, and false-negative cancers can be found in retrospective reviews of prior mammograms. In 2001, 67 women with 69 surgically proven cancers detected at screening in the Mammography Centre of Helsinki University Hospital also had previous mammograms available. These mammograms were analyzed by an experienced screening radiologist, who found that 36 lesions had already been visible in previous screening rounds; CAD (Second Look v. 4.01) detected 23 of these missed lesions. Eight readers with differing experience in mammography screening read the films of 200 women with and without CAD; these films included 35 of the missed lesions and 16 screen-detected cancers. CAD sensitivity was 70.6% and specificity 15.8%. Use of CAD lengthened the mean reading time but did not significantly affect the readers' sensitivities or specificities; the use of this version of CAD (Second Look v. 4.01) is therefore questionable. Because no two of the eight readers found exactly the same cancers, two reading methods were compared: summarized independent reading (at least a single cancer-positive opinion within the group considered decisive) and conference consensus reading (the cancer-positive opinion of the reader majority considered decisive). The greatest sensitivity, 74.5%, was achieved when the independent readings of the four best-performing readers were summarized. Overall, summarized independent readings were more sensitive than conference consensus readings (64.7% vs. 43.1%), with far less difference in mean specificities (92.4% vs. 97.7%). After detecting a suspicious lesion, the radiologist has to decide on the most accurate, fast, and cost-effective means of further work-up.
The feasibility of fine-needle aspiration cytology (FNAC) and core-needle biopsy (CNB) in the diagnosis of breast lesions was compared in a non-randomised, retrospective study of 580 breast lesions (503 malignant) in 572 patients. The absolute sensitivity of CNB was better than that of FNAC: 96% (206/214) vs. 67% (194/289) (p < 0.0001). An additional needle biopsy or surgical biopsy was performed for 93 and 62 patients with FNAC, respectively, but for only 2 and 33 patients with CNB. The frequent need for supplementary biopsies and the unnecessary axillary operations due to false-positive findings made FNAC (EUR 294) more expensive than CNB (EUR 223), and because the advantage of quick analysis vanishes during the overall diagnostic and referral process, CNB is recommended as the initial biopsy method.
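The two reading strategies compared in the reader study are combination rules over binary reader opinions: summarized independent reading flags a case if any reader calls it positive, whereas conference consensus requires a reader majority. A sketch with synthetic votes (not the study's data) showing why the OR rule is the more sensitive of the two:

```python
# Hedged sketch of the two reading strategies. Votes are synthetic; 1 = reader
# called the case cancer-positive, 0 = negative.

def summarized_independent(votes):
    return any(votes)                     # a single positive opinion decides

def conference_consensus(votes):
    return sum(votes) > len(votes) / 2    # positive only if the majority agrees

def sensitivity(rule, cases):
    """Fraction of true cancers flagged by the given combination rule."""
    positives = [votes for truth, votes in cases if truth]
    return sum(rule(v) for v in positives) / len(positives)

# Synthetic panel of (true cancer status, votes of 5 readers):
cases = [
    (True,  [1, 0, 0, 0, 0]),   # one reader catches it -> only the OR rule fires
    (True,  [1, 1, 1, 0, 0]),   # a majority catches it -> both rules fire
    (True,  [0, 0, 0, 0, 0]),   # missed by everyone -> both rules fail
    (False, [0, 0, 0, 0, 0]),   # a true negative
]
print(sensitivity(summarized_independent, cases))   # 2/3 under the OR rule
print(sensitivity(conference_consensus, cases))     # 1/3 under majority voting
```

The trade-off seen in the study follows directly: any-positive decisions can only add detections relative to majority voting, which raises sensitivity, while the extra positive calls on healthy women lower specificity (92.4% vs. 97.7%).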