28 results for Liquid level indicators.

in Helda - Digital Repository of the University of Helsinki


Relevance: 30.00%

Abstract:

For the past twenty years, several indicator sets have been produced at international, national and regional levels. Most of the work has concentrated on the selection of the indicators and on the collection of pertinent data, but less attention has been given to the actual users and their needs. This dissertation focuses on the use of sustainable development indicator sets. The dissertation explores the reasons that have deterred the use of the indicators, discusses the role of sustainable development indicators in a policy cycle and broadens the view of use by recognising three different types of use. The work presents two indicator development processes: the Finnish national sustainable development indicators and the socio-cultural indicators supporting the measurement of eco-efficiency in the Kymenlaakso Region. The sets are compared using a framework created in this work to describe indicator process quality. It includes five principles supported by more specific criteria. The principles are high policy relevance, sound indicator quality, efficient participation, effective dissemination and long-term institutionalisation. The framework provided a way to identify the key obstacles to use. The two immediate problems with current indicator sets are that the users are unaware of them and the indicators are often unsuitable to their needs. The reasons for these major flaws are the irrelevance of the indicators to policy needs, technical shortcomings in content and presentation, failure to engage the users in the development process, non-existent dissemination strategies and a lack of institutionalisation to promote and update the indicators. The importance of the different obstacles varies among users and use types.
In addition to the indicator projects, the materials used in the dissertation include 38 interviews with high-level policy-makers or civil servants close to them, download statistics for the national indicator Internet pages, citations of the national indicator publication, and the media coverage of both indicator sets. According to the results, the most likely use of a sustainable development indicator set by policy-makers is to learn about the concept. Very little evidence of direct use to support decision-making was available. Conceptual use is also common for other user groups, namely the media, civil servants, researchers, students and teachers. Decision-makers themselves consider the most obvious use for the indicators to be the promotion of their own views, which is a form of legitimising use. The sustainable development indicators have different types of use in the policy cycle, and the most commonly expected type, instrumental use, is not very likely or even desirable at all stages. The stages of persuading the public and decision-makers about new problems, as well as formulating new policies, employ legitimising use. Learning through conceptual use is also inherent to policy-making, as the people involved learn about the new situation. Instrumental use is most likely in policy formulation, implementation and evaluation. The dissertation is an article dissertation, including five papers published in scientific journals and an extensive introductory chapter that discusses and weaves together the papers.

Relevance: 30.00%

Abstract:

Industrial ecology is an important field of sustainability science. It can be applied to study environmental problems in a policy relevant manner. Industrial ecology uses ecosystem analogy; it aims at closing the loop of materials and substances and at the same time reducing resource consumption and environmental emissions. Emissions from human activities are related to human interference in material cycles. Carbon (C), nitrogen (N) and phosphorus (P) are essential elements for all living organisms, but in excess have negative environmental impacts, such as climate change (CO2, CH4, N2O), acidification (NOx) and eutrophication (N, P). Several indirect macro-level drivers affect emissions change. Population and affluence (GDP/capita) often act as upward drivers for emissions. Technology, as emissions per service used, and consumption, as economic intensity of use, may act as drivers resulting in a reduction in emissions. In addition, the development of country-specific emissions is affected by international trade. The aim of this study was to analyse changes in emissions as affected by macro-level drivers in different European case studies. ImPACT decomposition analysis (IPAT identity) was applied as a method in papers I–III. The macro-level perspective was applied to evaluate CO2 emission reduction targets (paper II) and the sharing of greenhouse gas emission reduction targets (paper IV) in the European Union (EU27) up to the year 2020. Data for the study were mainly gathered from official statistics. In all cases, the results were discussed from an environmental policy perspective. The development of nitrogen oxide (NOx) emissions was analysed in the Finnish energy sector during a long time period, 1950–2003 (paper I). Finnish emissions of NOx began to decrease in the 1980s as the progress in technology in terms of NOx/energy curbed the impact of the growth in affluence and population.
Carbon dioxide (CO2) emissions related to energy use during 1993–2004 (paper II) were analysed by country and region within the European Union. Considering energy-based CO2 emissions in the European Union, dematerialization and decarbonisation did occur, but not sufficiently to offset population growth and the rapidly increasing affluence during 1993–2004. The development of nitrogen and phosphorus load from aquaculture in relation to salmonid consumption in Finland during 1980–2007 was examined, including international trade in the analysis (paper III). A regional environmental issue, eutrophication of the Baltic Sea, and a marginal, yet locally important source of nutrients was used as a case. Nutrient emissions from Finnish aquaculture decreased from the 1990s onwards: although population, affluence and salmonid consumption steadily increased, aquaculture technology improved and the relative share of imported salmonids increased. According to the sustainability challenge in industrial ecology, the environmental impact of the growing population size and affluence should be compensated by improvements in technology (emissions/service used) and with dematerialisation. In the studied cases, the emission intensity of energy production could be lowered for NOx by cleaning the exhaust gases. Reorganization of the structure of energy production as well as technological innovations will be essential in lowering the emissions of both CO2 and NOx. Regarding the intensity of energy use, making the combustion of fuels more efficient and reducing energy use are essential. In reducing nutrient emissions from Finnish aquaculture to the Baltic Sea (paper III) through technology, limits of biological and physical properties of cultured fish, among others, will eventually be faced. Regarding consumption, salmonids are preferred to many other protein sources. Regarding trade, increasing the proportion of imports will outsource the impacts.
Besides improving technology and dematerialization, other viewpoints may also be needed. Reducing the total amount of nutrients cycling in energy systems and eventually contributing to NOx emissions needs to be emphasized. Considering aquaculture emissions, nutrient cycles can be partly closed by using local fish as feed, replacing imported feed. In particular, the reduction of CO2 emissions in the future is a very challenging task when considering the necessary rates of dematerialisation and decarbonisation (paper II). Climate change mitigation may have to focus on greenhouse gases other than CO2 and on the potential role of biomass as a carbon sink, among others. The global population is growing and scaling up the environmental impact. Population issues and growing affluence must be considered when discussing emission reductions. Climate policy has only very recently had an influence on emissions, and strong actions are now called for in climate change mitigation. Environmental policies in general must cover all the regions related to production and impacts in order to avoid outsourcing of emissions and leakage effects. The macro-level drivers affecting changes in emissions can be identified with the ImPACT framework. Statistics for generally known macro-indicators are currently relatively widely available for different countries, and the method is transparent. In the papers included in this study, a similar method was successfully applied in different types of case studies. Using transparent macro-level figures and a simple top-down approach are also appropriate in evaluating and setting international emission reduction targets, as demonstrated in papers II and IV. The projected rates of population and affluence growth are especially worth consideration in setting targets. However, sensitivities in calculations must be carefully acknowledged. In the basic form of the ImPACT model, the economic intensity of consumption and emission intensity of use are included.
In seeking to examine not only consumption but also international trade in more detail, imports were included in paper III. This example demonstrates well how outsourcing of production influences domestic emissions. Country-specific production-based emissions have often been used in similar decomposition analyses. Nevertheless, trade-related issues must not be ignored.
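The ImPACT identity described above can be sketched numerically: emissions factor into population (P), affluence (A, GDP per capita), economic intensity of use (C, service per GDP) and emissions per service used (T), and the change in each factor between two years multiplies out to the total change in emissions. The sketch below uses purely illustrative, made-up numbers, not data from the papers:

```python
def impact_factors(state):
    """Decompose emissions via the ImPACT identity:
    emissions = P * A * C * T, where
      P = population
      A = GDP / population      (affluence)
      C = service / GDP         (economic intensity of use)
      T = emissions / service   (emissions per service used)
    `state` is a dict with keys population, gdp, service, emissions."""
    P = state["population"]
    A = state["gdp"] / state["population"]
    C = state["service"] / state["gdp"]
    T = state["emissions"] / state["service"]
    return {"P": P, "A": A, "C": C, "T": T}

def decompose_change(start, end):
    """Ratio of each factor between two years; the product of the four
    ratios equals the ratio of total emissions (multiplicative form)."""
    f0, f1 = impact_factors(start), impact_factors(end)
    return {k: f1[k] / f0[k] for k in f0}

# Hypothetical country: emissions fall despite population and GDP growth,
# because intensity factors (C, T) improve faster than P and A grow.
start = dict(population=5.0e6, gdp=100e9, service=300e9, emissions=60e6)
end   = dict(population=5.3e6, gdp=140e9, service=350e9, emissions=55e6)

ratios = decompose_change(start, end)
total_change = 1.0
for r in ratios.values():
    total_change *= r  # equals end emissions / start emissions
```

This multiplicative form is what makes the framework transparent: each driver's contribution can be read off directly from its ratio, and upward drivers (P, A above 1) can be weighed against downward ones (C, T below 1).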

Relevance: 20.00%

Abstract:

Poor pharmacokinetics is one of the reasons for the withdrawal of drug candidates from clinical trials. There is an urgent need for investigating in vitro ADME (absorption, distribution, metabolism and excretion) properties and recognising unsuitable drug candidates as early as possible in the drug development process. Current throughput of in vitro ADME profiling is insufficient because effective new synthesis techniques, such as drug design in silico and combinatorial synthesis, have vastly increased the number of drug candidates. Assay technologies for larger sets of compounds than are currently feasible are critically needed. The first part of this work focused on the evaluation of the cocktail strategy in studies of drug permeability and metabolic stability. N-in-one liquid chromatography-tandem mass spectrometry (LC/MS/MS) methods were developed and validated for the multiple component analysis of samples in cocktail experiments. Together, cocktail dosing and LC/MS/MS were found to form an effective tool for increasing throughput. First, cocktail dosing, i.e. the use of a mixture of many test compounds, was applied in permeability experiments with the Caco-2 cell culture, which is a widely used in vitro model for small intestinal absorption. A cocktail of 7-10 reference compounds was successfully evaluated for standardization and routine testing of the performance of Caco-2 cell cultures. Secondly, the cocktail strategy was used in metabolic stability studies of drugs with UGT isoenzymes, which are among the most important phase II drug-metabolizing enzymes. The study confirmed that the determination of intrinsic clearance (Clint) as a cocktail of seven substrates is possible. The LC/MS/MS methods that were developed were fast and reliable for the quantitative analysis of a heterogeneous set of drugs from Caco-2 permeability experiments and the set of glucuronides from in vitro stability experiments.
The performance of a new ionization technique, atmospheric pressure photoionization (APPI), was evaluated through comparison with electrospray ionization (ESI), where both techniques were used for the analysis of Caco-2 samples. Like ESI, APPI proved to be a reliable technique for the analysis of Caco-2 samples, and it was even more flexible than ESI because of its wider linear dynamic range. The second part of the experimental study focused on metabolite profiling. Different mass spectrometric instruments and commercially available software tools were investigated for profiling metabolites in urine and hepatocyte samples. All the instruments tested (triple quadrupole, quadrupole time-of-flight, ion trap) exhibited both strengths and weaknesses in searching for and identifying expected and unexpected metabolites. Although current profiling software is helpful, it is still insufficient; thus a time-consuming, largely manual approach is still required for metabolite profiling from complex biological matrices.
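As a sketch of how an intrinsic clearance (Clint) figure can be derived from an in vitro stability experiment, the common substrate-depletion approach fits a first-order decay to the log of substrate concentration over incubation time and scales the rate constant by incubation volume per amount of protein. This is a generic illustration with made-up numbers, not necessarily the exact assay design or calculation used in the thesis:

```python
import math

def intrinsic_clearance(times_min, conc, incubation_vol_ul, protein_mg):
    """Substrate-depletion estimate of Clint (uL/min/mg protein):
    fit ln(C) = ln(C0) - k*t by simple least squares, then
    Clint = k * incubation volume / protein amount."""
    n = len(times_min)
    ys = [math.log(c) for c in conc]
    xbar = sum(times_min) / n
    ybar = sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(times_min, ys))
             / sum((x - xbar) ** 2 for x in times_min))
    k = -slope  # first-order depletion rate constant (1/min)
    return k * incubation_vol_ul / protein_mg

# Made-up depletion data: substrate concentration (uM) over a 30-min
# incubation, decaying roughly exponentially
times = [0, 10, 20, 30]
conc = [10.0, 7.4, 5.5, 4.1]
clint = intrinsic_clearance(times, conc, incubation_vol_ul=500, protein_mg=0.25)
```

In a cocktail experiment the same fit would simply be repeated per substrate, using the substrate-specific concentrations quantified by the LC/MS/MS method.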

Relevance: 20.00%

Abstract:

My Ph.D. dissertation presents a multi-disciplinary analysis of the mortuary practices of the Tiwanaku culture of the Bolivian high plateau, situated at an altitude of c. 3800 m above sea level. The Tiwanaku State (c. AD 500-1150) was one of the most important pre-Inca civilisations of the South Central Andes. The book begins with a brief introductory chapter. In chapter 2 I discuss methodological and theoretical developments in archaeological mortuary studies from the late 1960s until the turn of the millennium. I am especially interested in the issue of how archaeological burial data can be used to draw inferences on the social structure of prehistoric societies. Chapter 3 deals with the early historic sources written in the 16th and 17th centuries, following the Spanish Conquest of the Incas. In particular, I review information on how the Incas manifested status differences between and within social classes and what kinds of burial treatments they applied. In chapter 4 I compare the Inca case with 20th century ethnographic data on the Aymara Indians of the Bolivian high plateau. Even though Christianity has affected virtually every level of Aymara religion, surprisingly many traditional features can still be observed in present day Aymara mortuary ceremonies. The archaeological part of my book begins with chapter 5, which is an introduction to Tiwanaku archaeology. In the next chapter, I present an overview of previously reported Tiwanaku cemeteries and burials. Chapter 7 deals with my own excavations at the Late Tiwanaku/early post-Tiwanaku cemetery site of Tiraska, located on the south-eastern shore of Lake Titicaca. During the 1998, 2002, and 2003 field seasons, a total of 32 burials were investigated at Tiraska. The great majority of these were subterranean stone-lined tombs, each containing the skeletal remains of one individual and one or two ceramic vessels.
Nine burials have been radiocarbon dated, the dates indicating that the cemetery was in use from the 10th until the 13th century AD. In chapter 8 I point out that considerable regional and/or ethnic differences can be noted between studied Tiwanaku cemetery sites. Because of these differences, and a general lack of securely dated burial contexts, I feel that at present we can do no better than to classify most studied Tiwanaku burials into three broad categories: (1) elite and/or priests, (2) "commoners", and (3) sacrificial victims and/or slaves and/or prisoners of war. On the basis of such indicators as monumental architecture and occupational specialisation we would expect to find considerable status-related differences in tomb size, grave goods, etc. among the Tiwanaku. Interestingly, however, such variation is rather modest, and the Tiwanaku seem to have been far less inclined than their pre-Columbian contemporaries in many parts of the Central Andes to expend considerable labour and resources on burial facilities.

Relevance: 20.00%

Abstract:

The influence of the architecture of the Byzantine capital spread to the Mediterranean provinces with travelling masters and architects. In this study the architecture of the Constantinopolitan School has been identified on the basis of the typology of churches, supplemented by certain morphological aspects when necessary. The impact of the Constantinopolitan workshops appears to have been more important than previously realized. This research revealed that the Constantinopolitan composite domed inscribed-cross type, or cross-in-square, spread throughout the Balkans and was soon adopted by the local schools of architecture. In addition, two novel variants were invented on the basis of this model: the semi-composite type and the so-called Athonite type. In the latter variant, lateral conches, choroi, were added for liturgical reasons. In contrast, the origin of the domed ambulatory church was partly provincial. One result of this study is that the origin of the Middle Byzantine domed octagonal types was traced to Constantinople. This is attested by the archaeological evidence. Also some other architectural elements that have not been preserved in the destroyed capital have survived at the provincial level: the domed hexagonal type, the multi-domed superstructure, the pseudo-octagon and the narthex known as the lite. The Constantinopolitan architecture of the period in question was based on Early Christian and Late Antique forms, practices and innovations, and this also emerges at the provincial level.

Relevance: 20.00%

Abstract:

Material and immaterial security. Households, ecological and economic resources and formation of contacts in Valkeala parish from the 1630s to the 1750s. The geographical area of the thesis, Valkeala parish in the region of Kymenlaakso, is particularly interesting owing to its diversity, both in terms of natural setting and of economic and cultural structure. The study begins by outlining the ecological and economic features of Valkeala and by analysing household structures. The main focus of the research lies in the contacts of the households with the outside world. The following types of contacts are chosen as indicators of the interaction: trade and credit relations, guarantees, co-operation, marriages and godparentage. The main theme of the contact analysis is to observe the significance of three factors, namely geographical extent, affluence level and kinship, to the formation of contacts. It is also essential to chart the interdependencies between ecological and economic resources, changes in the structure of households and the formation of contacts during the period studied. The time between the 1630s and the 1750s was characterized by wars, crop losses and population changes, which had an effect on the economic framework and on the structural variation of households and contact fields. In the 17th and 18th centuries Valkeala could be divided, economically, into two sections according to the predominant cultivation technique. The western area formed the field area and the eastern and northern villages the swidden area. Multiple family households were dominant in the latter part of the 17th century, and for most of the study period, the majority of people lived in the more complex households rather than in simple families. Economic resources had only a moderate impact on the structure of contacts. There was a clear connection between larger household size and the extent and intensity of contacts.
The jurisdictional boundary that ran across Valkeala from the northwest to the southeast and divided the parish into two areas influenced the formation of contacts more than the parish boundaries. Support and security were offered largely by the primary contacts with one's immediate family, neighbours and friends. Economic support was channelled from the wealthier to the less well off through credit. Cross-marriages, cross-godparentage and marital networks could be seen as manifestations of an aim towards stability and the joining of resources. It was essential for households both to secure the workforce needed for a minimum level of subsistence and to ensure the continuation of the family line. These goals could best be reached by complex households that could adapt to the prevailing circumstances and also had wider and more multi-layered contacts offering material and immaterial security.

Relevance: 20.00%

Abstract:

Comprehension of a complex acoustic signal - speech - is vital for human communication, with numerous brain processes required to convert the acoustics into an intelligible message. In four studies in the present thesis, cortical correlates for different stages of speech processing in the mature linguistic system of adults were investigated. In two further studies, developmental aspects of cortical specialisation and its plasticity in adults were examined. In the present studies, electroencephalographic (EEG) and magnetoencephalographic (MEG) recordings of the mismatch negativity (MMN) response elicited by changes in repetitive unattended auditory events and the phonological mismatch negativity (PMN) response elicited by unexpected speech sounds in attended speech inputs served as the main indicators of cortical processes. Changes in speech sounds elicited the MMNm, the magnetic equivalent of the electric MMN, which differed in generator loci and strength from responses elicited by comparable changes in non-speech sounds, suggesting intra- and interhemispheric specialisation in the processing of speech and non-speech sounds at an early automatic processing level. This neuronal specialisation for the mother tongue was also reflected in the more efficient formation of stimulus representations in auditory sensory memory for typical native-language speech sounds compared with those formed for unfamiliar, non-prototype speech sounds and simple tones. Further, adding a speech or non-speech sound context to syllable changes was found to modulate the MMNm strength differently in the left and right hemispheres. Following the acoustic-phonetic processing of speech input, phonological effort related to the selection of possible lexical (word) candidates was linked with distinct left-hemisphere neuronal populations. In summary, the results suggest functional specialisation in the neuronal substrates underlying different levels of speech processing.
Subsequently, plasticity of the brain's mature linguistic system was investigated in adults, in whom representations for an aurally-mediated communication system, Morse code, were found to develop within the same hemisphere where representations for the native-language speech sounds were already located. Finally, recording and localization of the MMNm response to changes in speech sounds was successfully accomplished in newborn infants, encouraging future MEG investigations on, for example, the state of neuronal specialisation at birth.

Relevance: 20.00%

Abstract:

The present research focused on motivational and personality traits measuring individual differences in the experience of negative affect, in reactivity to negative events, and in the tendency to avoid threats. In this thesis, such traits (i.e., neuroticism and dispositional avoidance motivation) are jointly referred to as trait avoidance motivation. The seven studies presented here examined the moderators of such traits in predicting risk judgments, negatively biased processing, and adjustment. Given that trait avoidance motivation encompasses reactivity to negative events and tendency to avoid threats, it can be considered surprising that this trait does not seem to be related to risk judgments and that it seems to be inconsistently related to negatively biased information processing. Previous work thus suggests that some variable(s) moderate these relations. Furthermore, recent research has suggested that despite the close connection between trait avoidance motivation and (mal)adjustment, measures of cognitive performance may moderate this connection. However, it is unclear whether this moderation is due to different response processes between individuals with different cognitive tendencies or abilities, or to the genuinely buffering effect of high cognitive ability against the negative consequences of high trait avoidance motivation. Studies 1-3 showed that there is a modest direct relation between trait avoidance motivation and risk judgments, but studies 2-3 demonstrated that state motivation moderates this relation. In particular, individuals in an avoidance state made high risk judgments regardless of their level of trait avoidance motivation. This result explained the disparity between the theoretical conceptualization of avoidance motivation and the results of previous studies suggesting that the relation between trait avoidance motivation and risk judgments is weak or nonexistent. 
Studies 5-6 examined threat identification tendency as a moderator for the relationship between trait avoidance motivation and negatively biased processing. However, no evidence for such moderation was found. Furthermore, in line with previous work, the results of studies 5-6 suggested that trait avoidance motivation is inconsistently related to negatively biased processing, implying that theories concerning traits and information processing may need refining. Study 7 examined cognitive ability as a moderator for the relation between trait avoidance motivation and adjustment, and demonstrated that cognitive ability moderates the relation between trait avoidance motivation and indicators of both self-reported and objectively measured adjustment. Thus, the results of Study 7 supported the buffer explanation for the moderating influence of cognitive performance. To summarize, the results showed that it is possible to find factors that consistently moderate the relations between traits and important outcomes (e.g. adjustment). Identifying such factors and studying their interplay with traits is one of the most important goals of current personality research. The present thesis contributed to this line of work in relation to trait avoidance motivation.

Relevance: 20.00%

Abstract:

Occupational burnout and health. Occupational burnout is assumed to be a negative consequence of chronic work stress. In this study, it was explored in the framework of occupational health psychology, which focuses on psychologically mediated processes between work and health. The objectives were to examine the overlap between burnout and ill health in relation to mental disorders, musculoskeletal disorders, and cardiovascular diseases, which are the three commonest disease groups causing work disability in Finland; to study whether burnout can be distinguished from ill health by its relation to work characteristics and work disability; and to determine the socio-demographic correlates of burnout at the population level. A nationally representative sample of the Finnish working population aged 30 to 64 years (n = 3151-3424) from the multidisciplinary epidemiological Health 2000 Study was used. Burnout was measured with the Maslach Burnout Inventory - General Survey. The diagnoses of common mental disorders were based on the standardized mental health interview (the Composite International Diagnostic Interview), and physical illnesses were determined in a comprehensive clinical health examination by a research physician. Medically certified sickness absences exceeding 9 work days during a 2-year period were extracted from a register of The Social Insurance Institution of Finland. Work stress was operationalized according to the job strain model. Gender, age, education, occupational status, and marital status were recorded as socio-demographic factors. Occupational burnout was related to an increased prevalence of depressive and anxiety disorders and alcohol dependence among both the men and the women. Burnout was also related to musculoskeletal disorders among the women and cardiovascular diseases among the men, independently of socio-demographic factors, physical strenuousness of work, health behaviour, and depressive symptoms.
The odds of having at least one long, medically certified sickness absence were higher for employees with burnout than for their colleagues without burnout. For severe burnout, this association was independent of co-occurring common mental disorders and physical illnesses for both genders, as was also the case for mild burnout among the women. In the subgroup of men with absences, severe burnout was related to a greater number of absence days than it was among the women with absences. High job strain was associated with a higher occurrence of burnout and depressive disorders than low job strain was. Of these, the association between job strain and burnout was stronger, and it persisted after controlling for socio-demographic factors, health behaviour, physical illnesses, and various indicators of mental health. In contrast, job strain was not related to depressive disorders after burnout was accounted for. Among the working population over 30 years of age, burnout was positively associated with age. There was also a tendency towards higher levels of burnout among the women with low educational attainment and occupational status and among the unmarried men. In conclusion, a considerable overlap was found between burnout, mental disorders, and physical illnesses. Still, burnout did not seem to be entirely redundant with respect to ill health. Burnout may be more strongly related to stressful work characteristics than depressive disorders are. In addition, burnout seems to be an independent risk factor for work disability, and it could possibly be used as a marker of health-impairing work stress. However, burnout may represent a different kind of risk factor for men and women, and this possibility needs to be taken into account in the promotion of occupational health.

Relevance: 20.00%

Abstract:

Aims: The aims of this study were 1) to identify and describe health economic studies that have used quality-adjusted life years (QALYs) based on actual measurements of patients' health-related quality of life (HRQoL); 2) to test the feasibility of routine collection of HRQoL data as an indicator of effectiveness of secondary health care; and 3) to establish and compare the cost-utility of three large-volume surgical procedures in a real-world setting in the Helsinki University Central Hospital, a large referral hospital providing secondary and tertiary health-care services for a population of approximately 1.4 million. Patients and methods: To identify studies that have used QALYs as an outcome measure, a systematic search of the literature was performed using the Medline, Embase, CINAHL, SCI and Cochrane Library electronic databases. Initial screening of the identified articles involved two reviewers independently reading the abstracts; the full-text articles were also evaluated independently by two reviewers, with a third reviewer used in cases where the two reviewers could not reach a consensus on which articles should be included. The feasibility of routinely evaluating the cost-effectiveness of secondary health care was tested by setting up a system for collecting HRQoL data on approximately 4 900 patients before and after operative treatments performed in the hospital. The HRQoL data used as an indicator of treatment effectiveness were combined with diagnostic and financial indicators routinely collected in the hospital. To compare the cost-effectiveness of three surgical interventions, 712 patients admitted for routine operative treatment completed the 15D HRQoL questionnaire before and also 3-12 months after the operation. QALYs were calculated using the obtained utility data and the expected remaining life years of the patients.
Direct hospital costs were obtained from the clinical patient administration database of the hospital, and a cost-utility analysis was performed from the perspective of the provider of secondary health care services. Main results: The systematic review (Study I) showed that although QALYs gained are considered an important measure of the effectiveness of health care, the number of studies in which QALYs are based on actual measurements of patients' HRQoL is still fairly limited. Of the reviewed full-text articles, only 70 reported QALYs based on actual before–after measurements using a valid HRQoL instrument. Collection of simple cost-effectiveness data in secondary health care is feasible and could easily be expanded and performed on a routine basis (Study II). It allows meaningful comparisons between various treatments and provides a means for allocating limited health care resources. The cost per QALY gained was €2 770 for cervical operations and €1 740 for lumbar operations. In cases where surgery was delayed, the cost per QALY was doubled (Study III). The cost per QALY varied between subgroups in cataract surgery (Study IV). The cost per QALY gained was €5 130 for patients having both eyes operated on and €8 210 for patients with only one eye operated on during the 6-month follow-up. In patients whose first eye had been operated on previous to the study period, the mean HRQoL deteriorated after surgery, thus precluding the establishment of the cost per QALY. In arthroplasty patients (Study V) the mean cost per QALY gained in a one-year period was €6 710 for primary hip replacement, €52 270 for revision hip replacement, and €14 000 for primary knee replacement. Conclusions: Although the importance of cost-utility analyses has been stressed during recent years, there are only a limited number of studies in which the evaluation is based on patients' own assessment of the treatment effectiveness.
Most cost-effectiveness and cost-utility analyses are based on modeling that employs expert opinion regarding the outcome of treatment, not on patient-derived assessments. Routine collection of effectiveness information from patients entering treatment in secondary health care turned out to be straightforward and did not, for instance, require additional personnel on the wards in which the study was executed. The mean patient response rate was more than 70%, suggesting that patients were willing to participate and appreciated the fact that the hospital showed an interest in their well-being even after the actual treatment episode had ended. Spinal surgery led to a statistically significant and clinically important improvement in HRQoL. The cost per QALY gained was reasonable, at less than half of that observed, for instance, for hip replacement surgery. However, prolonged waiting for an operation approximately doubled the cost per QALY gained from the surgical intervention. The mean utility gain following routine cataract surgery in a real-world setting was relatively small and confined mostly to patients who had had both eyes operated on. The cost of cataract surgery per QALY gained was higher than previously reported and was associated with a considerable degree of uncertainty. Hip and knee replacement both improve HRQoL. The cost per QALY gained from knee replacement is twofold that of hip replacement. Cost-utility results from the three studied specialties showed great variation in the cost-utility of surgical interventions performed in a real-world setting, even when only common, widely accepted interventions are considered. However, the cost per QALY of all the studied interventions, except for revision hip arthroplasty, was well below 50 000, a figure sometimes cited in the literature as a threshold level for the cost-effectiveness of an intervention. 
Based on the present study, it may be concluded that routine evaluation of the cost-utility of secondary health care is feasible and produces information essential for a rational and balanced allocation of scarce health care resources.
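
The cost-utility calculation described above can be sketched in a few lines. This is an illustrative simplification rather than the thesis's exact method: it assumes the utility change measured before and after surgery persists over the patient's expected remaining life years, and the function and variable names are hypothetical.

```python
def qaly_gain(u_pre, u_post, life_years):
    """QALYs gained from a treatment, assuming the measured utility
    change (e.g. from the 15D instrument, scale 0-1) persists over the
    patient's expected remaining life years."""
    return (u_post - u_pre) * life_years


def cost_per_qaly(cost, u_pre, u_post, life_years):
    """Cost-utility ratio from the provider's perspective."""
    gain = qaly_gain(u_pre, u_post, life_years)
    if gain <= 0:
        # Mirrors the cataract subgroup whose mean HRQoL deteriorated:
        # a cost per QALY cannot be established.
        raise ValueError("no utility gain; cost per QALY is undefined")
    return cost / gain
```

For example, a utility gain of 0.05 sustained over 20 expected remaining life years yields one QALY, so an intervention costing 2 770 per patient would come to roughly 2 770 per QALY gained.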

Relevância:

20.00%

Publicador:

Resumo:

Various reasons, such as ethical issues in maintaining blood resources, growing costs, and strict requirements for safe blood, have increased the pressure for efficient use of resources in blood banking. The competence of blood establishments can be characterized by their ability to predict blood collection volumes so that cellular blood components can be provided in a timely manner, as dictated by hospital demand. The stochastically varying clinical need for platelets (PLTs) poses a specific challenge for balancing supply with requests. Labour has proven to be a primary cost driver and should be managed efficiently. International comparisons of blood banking could reveal inefficiencies and allow reallocation of resources. Seventeen blood centres from 10 countries in continental Europe, Great Britain, and Scandinavia participated in this study. The centres were national institutes (5), parts of the local Red Cross organisation (5), or integrated into university hospitals (7). This study focused on the centres' blood component preparation departments. The data were obtained retrospectively by computerized questionnaires completed via the Internet for the years 2000-2002. The data were used in four original articles (numbered I through IV) that form the basis of this thesis. Non-parametric data envelopment analysis (DEA, II-IV) was applied to evaluate and compare the relative efficiency of blood component preparation. Several models were created using different input and output combinations. The comparisons focused on technical efficiency (II-III) and labour efficiency (I, IV). An empirical cost model was tested to evaluate cost efficiency (IV). Purchasing power parities (PPP, IV) were used to adjust the costs of the working hours and to make the costs comparable among countries. The total annual number of whole blood (WB) collections varied from 8,880 to 290,352 in the centres (I). 
Significant variation was also observed in the annual volume of produced red blood cells (RBCs) and PLTs. The annual number of PLTs produced by any method varied from 2,788 to 104,622 units. In 2002, 73% of all PLTs were produced by the buffy coat (BC) method, 23% by apheresis and 4% by the platelet-rich plasma (PRP) method. The annual discard rate of PLTs varied from 3.9% to 31%. The mean discard rate (13%) remained in the same range throughout the study period and demonstrated similar levels and variation in 2003-2004 according to a specific follow-up question (14%, range 3.8%-24%). The annual PLT discard rates were, to some extent, associated with production volumes. The mean RBC discard rate was 4.5% (range 0.2%-7.7%). Technical efficiency showed marked variation (median 60%, range 41%-100%) among the centres (II). Compared to the efficient departments, the inefficient departments used excess labour resources (and probably excess production equipment) to produce RBCs and PLTs. Technical efficiency tended to be higher when the (theoretical) proportion of lost WB collections (total RBC+PLT loss) from all collections was low (III). Labour efficiency varied remarkably, from 25% to 100% (median 47%), when working hours were the only input (IV). Using the estimated total costs as the input (cost efficiency) revealed an even greater variation (13%-100%) and an overall lower efficiency level compared to labour only as the input. In cost efficiency alone, the savings potential (observed inefficiency) was more than 50% in 10 departments, whereas the labour and cost savings potentials were both more than 50% in six departments. The association between department size and efficiency (scale efficiency) could not be verified statistically in the small sample. In conclusion, international evaluation of the technical efficiency of component preparation departments revealed remarkable variation. 
A suboptimal combination of manpower and production output levels was the major cause of inefficiency, and efficiency was not directly related to production volume. Evaluation of the reasons for discarding components may offer a novel approach to studying efficiency. DEA proved applicable in analyses including various factors as inputs and outputs. This study suggests that analytical models can be developed to serve as indicators of technical efficiency and promote improvements in the management of limited resources. The work also demonstrates the importance of integrating efficiency analysis into international comparisons of blood banking.
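
The DEA underlying Studies II-IV can be illustrated with a minimal input-oriented, constant-returns-to-scale (CCR) model solved as a linear program. The sketch below is an assumption about the general technique rather than the thesis's actual model specifications (which used several input-output combinations), and the function and variable names are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog


def dea_efficiency(X, Y, unit):
    """Input-oriented CCR efficiency score for one decision-making unit.

    X : (n_units, n_inputs) array, e.g. working hours per department
    Y : (n_units, n_outputs) array, e.g. RBC and PLT units produced
    unit : row index of the department being evaluated
    Returns theta in (0, 1]; theta = 1 means technically efficient.
    """
    n, m = X.shape
    s = Y.shape[1]
    # Decision variables: [theta, lambda_1, ..., lambda_n]; minimise theta.
    c = np.r_[1.0, np.zeros(n)]
    # Inputs of the peer combination must not exceed theta * own inputs:
    #   sum_j lambda_j * x_ij - theta * x_i,unit <= 0
    A_inputs = np.c_[-X[unit].reshape(-1, 1), X.T]
    # Outputs of the peer combination must cover own outputs:
    #   -sum_j lambda_j * y_rj <= -y_r,unit
    A_outputs = np.c_[np.zeros((s, 1)), -Y.T]
    A_ub = np.vstack([A_inputs, A_outputs])
    b_ub = np.r_[np.zeros(m), -Y[unit]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return res.fun
```

For each department, the program finds the smallest factor theta by which its inputs could be scaled down while a non-negative combination of the peer departments still matches its outputs; a score below 1 quantifies the observed savings potential.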

Relevância:

20.00%

Publicador:

Resumo:

Ozone (O3) is a reactive gas present in the troposphere in the range of parts per billion (ppb), i.e. molecules of O3 per 10^9 molecules of air. Its strong oxidative capacity makes it a key element in tropospheric chemistry and a threat to the integrity of materials, including living organisms. Knowledge and control of O3 levels matter for indoor air quality, building material endurance, human respiratory disorders, and plant performance. Ozone is also a greenhouse gas, and its abundance is relevant to global warming. The interaction of the lower troposphere with vegetated landscapes results in O3 being removed from the atmosphere by reactions that lead to the oxidation of plant-related components. Details on the rate and pattern of removal over different landscapes, as well as the ultimate mechanisms by which it occurs, are not fully resolved. This thesis analysed the processes controlling the transfer of ozone at the air-plant interface. Improved knowledge of these processes benefits the prediction of both the atmospheric removal of O3 and its impact on vegetation. This study was based on the measurement and analysis of multi-year field measurements of the O3 flux to Scots pine (Pinus sylvestris L.) foliage with a shoot-scale gas-exchange enclosure system. In addition, the analyses made use of simultaneous CO2 and H2O exchange, canopy-scale O3, CO2 and H2O exchange, foliage surface wetness, and environmental variables. All data were gathered at the SMEAR measuring station (southern Finland). Enclosure gas-exchange techniques such as those commonly used to measure CO2 and water vapour can be applied to measuring ozone exchange in the field. Through analysis of the system dynamics, disturbances and noise can be identified. In the system used in this study, possible artefacts arising from the reactivity of ozone towards the system materials, combined with low background concentrations, need to be taken into account. 
The main artefact was the loss of ozone to the chamber walls, which was found to be highly variable. The level of wall loss was obtained from simultaneous and continuous measurements and was included in the formulation of the mass balance of the O3 concentration inside the chamber. The analysis of the field measurements in this study shows that the flux of ozone to Scots pine foliage is generated in roughly equal proportions by stomatal and non-stomatal processes. Deposition to foliage and forest is sustained even during night and winter, when stomatal gas-exchange is low or absent. The non-stomatal portion of the flux was analysed further. The temporal pattern of the flux was found to be an overlap of the patterns of biological activity and the presence of wetness in the environment. This was observed both at the shoot and at the canopy scale. Wetness enhanced the flux not only in the presence of liquid droplets but also when a moisture film existed on the plant surfaces. The existence of these films and their relation to the ozone sinks was determined by simultaneous measurements of leaf surface wetness and ozone flux. The results suggest that ozone reacts at the foliage surface and that the reaction rate is mediated by the presence of surface wetness. Alternative mechanisms were discussed, including nocturnal stomatal aperture and the emission of reactive volatile compounds. The prediction of the total flux could thus be based on a combination of a model of stomatal behaviour and a model of water absorption on the foliage surfaces. The concepts behind the division into stomatal and non-stomatal sinks were reconsidered. This study showed that it is theoretically possible for a sink located before or near the stomatal aperture to prevent or diminish the diffusion of ozone towards the intercellular air space of the mesophyll. 
This obstacle to stomatal diffusion arises only under certain conditions, which include a very low density of reaction sites in the mesophyll and an extremely strong sink located on the outer surfaces or in the stomatal pore. The relevance, or even existence, of this process in natural conditions would need to be assessed further. Potentially strong reactants were considered, including dissolved sulphate, volatile organic compounds, and apoplastic ascorbic acid. Information on the location and relative abundance of these compounds would be valuable. The highest total flux to the foliage and forest occurs when both plant activity and ambient moisture are high. The highest uptake into the interior of the foliage occurs at large stomatal apertures, provided that scavenging reactions located near the stomatal pore are weak or non-existent. The discussion covers the methodological developments of this study, the relevance of the different factors controlling the ozone flux, the partitioning of the flux among its components, and the possible mechanisms of non-stomatal uptake.
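
The chamber mass balance with the wall-loss correction can be sketched as follows, assuming a steady state; the function name, variable names, units, and example values are illustrative rather than taken from the thesis.

```python
def shoot_o3_flux(c_in, c_out, flow, k_wall, volume, leaf_area):
    """Ozone flux to an enclosed shoot from a steady-state chamber
    mass balance (illustrative sketch):

        flow * (c_in - c_out) = flux * leaf_area + k_wall * volume * c_out

    c_in, c_out : O3 concentration at chamber inlet / outlet (nmol m-3)
    flow        : air flow through the chamber (m3 s-1)
    k_wall      : first-order wall-loss rate, calibrated per chamber (s-1)
    volume      : chamber volume (m3)
    leaf_area   : enclosed foliage area (m2)
    Returns the O3 flux to the foliage (nmol m-2 s-1).
    """
    total_sink = flow * (c_in - c_out)   # all O3 removed inside the chamber
    wall_sink = k_wall * volume * c_out  # portion lost to the chamber walls
    return (total_sink - wall_sink) / leaf_area
```

The wall-loss term matters precisely because the wall sink was found to be highly variable: without subtracting it, the foliage flux would be systematically overestimated.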

Relevância:

20.00%

Publicador:

Resumo:

The antioxidant activity of natural plant materials rich in phenolic compounds is being widely investigated for the protection of food products sensitive to oxidative reactions. In this thesis, plant materials rich in phenolic compounds were studied as possible antioxidants to prevent protein and lipid oxidation reactions in different food matrixes, such as pork meat patties and corn oil-in-water emulsions. Loss of anthocyanins was also measured during oxidation in corn oil-in-water emulsions. In addition, the impact of plant phenolics at the amino acid level was studied using tryptophan as a model compound to elucidate their role in preventing the formation of tryptophan oxidation products. A high-performance liquid chromatography (HPLC) method with ultraviolet and fluorescence detection (UV-FL) was developed that enabled fast investigation of the formation of tryptophan-derived oxidation products. Byproducts of oilseed processing such as rapeseed (Brassica rapa L.), camelina (Camelina sativa) and soy meal (Glycine max L.), as well as Scots pine bark (Pinus sylvestris) and several reference compounds, were shown to act as antioxidants toward both protein and lipid oxidation in cooked pork meat patties. In meat, the antioxidant activity of camelina, rapeseed and soy meal was more pronounced when used in combination with a commercial rosemary extract (Rosmarinus officinalis). Berry phenolics such as black currant (Ribes nigrum) anthocyanins and raspberry (Rubus idaeus) ellagitannins showed potent antioxidant activity toward lipid oxidation in corn oil-in-water emulsions with and without β-lactoglobulin. The antioxidant effect was more pronounced in the presence of β-lactoglobulin. The berry phenolics also inhibited the oxidation of the tryptophan and cysteine side chains of β-lactoglobulin. The results show that the amino acid side chains were oxidized prior to the propagation of lipid oxidation, thereby inhibiting fatty acid scission. 
In addition, the concentration and color of the black currant anthocyanins decreased during oxidation. Oxidation of tryptophan was investigated in two different oxidation models, with hydrogen peroxide (H2O2) and with hexanal/FeCl2. In both models, the oxidation of tryptophan yielded products such as 3a-hydroxypyrroloindole-2-carboxylic acid, dioxindolylalanine, 5-hydroxy-tryptophan, kynurenine, N-formylkynurenine and β-oxindolylalanine. However, formation of tryptamine was observed only when tryptophan was oxidized in the presence of H2O2. Pine bark phenolics, black currant anthocyanins and camelina meal phenolics, as well as cranberry proanthocyanidins (Vaccinium oxycoccus), provided the best antioxidant effect toward tryptophan and its oxidation products in the H2O2 model. The tryptophan modifications formed upon hexanal/FeCl2 treatment were inhibited most efficiently by camelina meal, followed by rapeseed and soy meal. In contrast, phenolics from raspberry, black currant, and rowanberry (Sorbus aucuparia) acted as weak prooxidants. This thesis contributes to elucidating the effects of natural phenolic compounds as potential antioxidants for controlling and preventing protein and lipid oxidation reactions. Understanding the relationships between phenolic compounds and proteins, as well as lipids, could lead to the development of new, effective, and multifunctional antioxidant strategies for food, cosmetic and pharmaceutical applications.