Abstract:
Aims: The aims of this study were 1) to identify and describe health economic studies that have used quality-adjusted life years (QALYs) based on actual measurements of patients' health-related quality of life (HRQoL); 2) to test the feasibility of routine collection of HRQoL data as an indicator of the effectiveness of secondary health care; and 3) to establish and compare the cost-utility of three large-volume surgical procedures in a real-world setting in the Helsinki University Central Hospital, a large referral hospital providing secondary and tertiary health-care services for a population of approximately 1.4 million. Patients and methods: To identify studies that have used QALYs as an outcome measure, a systematic search of the literature was performed using the Medline, Embase, CINAHL, SCI and Cochrane Library electronic databases. In the initial screening, two reviewers independently read the abstracts of the identified articles; the full-text articles were also evaluated independently by two reviewers, with a third reviewer consulted in cases where the first two could not reach a consensus on which articles should be included. The feasibility of routinely evaluating the cost-effectiveness of secondary health care was tested by setting up a system for collecting data on approximately 4 900 patients' HRQoL before and after operative treatments performed in the hospital. The HRQoL data used as an indicator of treatment effectiveness were combined with diagnostic and financial indicators routinely collected in the hospital. To compare the cost-effectiveness of three surgical interventions, 712 patients admitted for routine operative treatment completed the 15D HRQoL questionnaire before and 3-12 months after the operation. QALYs were calculated using the obtained utility data and the expected remaining life years of the patients. 
Direct hospital costs were obtained from the hospital's clinical patient administration database, and a cost-utility analysis was performed from the perspective of the provider of secondary health care services. Main results: The systematic review (Study I) showed that although QALYs gained are considered an important measure of the effectiveness of health care, the number of studies in which QALYs are based on actual measurements of patients' HRQoL is still fairly limited. Of the reviewed full-text articles, only 70 reported QALYs based on actual before-and-after measurements using a valid HRQoL instrument. Collection of simple cost-effectiveness data in secondary health care is feasible and could easily be expanded and performed on a routine basis (Study II). It allows meaningful comparisons between various treatments and provides a means for allocating limited health care resources. The cost per QALY gained was 2 770 for cervical operations and 1 740 for lumbar operations. In cases where surgery was delayed, the cost per QALY was doubled (Study III). The cost per QALY varied between subgroups in cataract surgery (Study IV): it was 5 130 for patients having both eyes operated on and 8 210 for patients with only one eye operated on during the 6-month follow-up. In patients whose first eye had been operated on prior to the study period, the mean HRQoL deteriorated after surgery, thus precluding the establishment of the cost per QALY. In arthroplasty patients (Study V), the mean cost per QALY gained in a one-year period was 6 710 for primary hip replacement, 52 270 for revision hip replacement, and 14 000 for primary knee replacement. Conclusions: Although the importance of cost-utility analyses has been stressed during recent years, there are only a limited number of studies in which the evaluation is based on patients' own assessment of treatment effectiveness. 
Most of the cost-effectiveness and cost-utility analyses are based on modeling that employs expert opinion regarding the outcome of treatment, not on patient-derived assessments. Routine collection of effectiveness information from patients entering treatment in secondary health care turned out to be easy enough and did not, for instance, require additional personnel on the wards in which the study was executed. The mean patient response rate was more than 70 %, suggesting that patients were happy to participate and appreciated the fact that the hospital showed an interest in their well-being even after the actual treatment episode had ended. Spinal surgery leads to a statistically significant and clinically important improvement in HRQoL. The cost per QALY gained was reasonable, at less than half of that observed, for instance, for hip replacement surgery. However, prolonged waiting for an operation approximately doubled the cost per QALY gained from the surgical intervention. The mean utility gain following routine cataract surgery in a real-world setting was relatively small and confined mostly to patients who had had both eyes operated on. The cost of cataract surgery per QALY gained was higher than previously reported and was associated with a considerable degree of uncertainty. Hip and knee replacement both improve HRQoL. The cost per QALY gained from knee replacement is two-fold that from hip replacement. Cost-utility results from the three studied specialties showed that there is great variation in the cost-utility of surgical interventions performed in a real-world setting, even when only common, widely accepted interventions are considered. However, the cost per QALY of all the studied interventions, except for revision hip arthroplasty, was well below 50 000, a figure sometimes cited in the literature as a threshold level for the cost-effectiveness of an intervention. 
Based on the present study it may be concluded that routine evaluation of the cost-utility of secondary health care is feasible and produces information essential for a rational and balanced allocation of scarce health care resources.
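The cost-utility figures above rest on a simple calculation: QALYs gained are the change in measured HRQoL utility multiplied by the patient's expected remaining life years, and cost per QALY is the direct cost divided by that gain. A minimal sketch of the arithmetic (the function names and the numeric values are illustrative, not data from the study):

```python
def qalys_gained(utility_before, utility_after, remaining_life_years):
    """QALYs gained = change in HRQoL utility x expected remaining life years."""
    return (utility_after - utility_before) * remaining_life_years

def cost_per_qaly(direct_cost, utility_before, utility_after, remaining_life_years):
    """Cost-utility ratio from the provider's perspective."""
    gain = qalys_gained(utility_before, utility_after, remaining_life_years)
    if gain <= 0:
        # HRQoL deteriorated or was unchanged: cost per QALY cannot be established
        return None
    return direct_cost / gain

# Illustrative values only: a 0.05 utility gain over 10 remaining life years
print(cost_per_qaly(3500, 0.80, 0.85, 10))  # approximately 7 000
```

As the cataract subgroup illustrates, when mean utility deteriorates after treatment the gain is non-positive and no cost per QALY can be established, which the sketch signals by returning None.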
Abstract:
Protecting the Australian citrus industry from HLB (greening) disease.
Abstract:
My work describes two sectors of the human bacterial environment: 1) the sources of exposure to infectious non-tuberculous mycobacteria; and 2) bacteria in dust, reflecting the airborne bacterial exposure in environments protecting from or predisposing to allergic disorders. Non-tuberculous mycobacteria (NTM) transmit to humans and animals from the environment. NTM infections in Finland have increased during the past decade beyond those caused by Mycobacterium tuberculosis. Among farm animals, porcine mycobacteriosis is the predominant NTM disease in Finland; symptoms of mycobacteriosis are found in 0.34 % of slaughtered pigs. Soil and drinking water are suspected as sources for humans, and bedding materials for pigs. To achieve quantitative data on the sources of human and porcine NTM exposure, methods for quantitation of environmental NTM are needed. We developed a quantitative real-time PCR (qPCR) method utilizing primers targeted at the 16S rRNA gene of the genus Mycobacterium. With this method, I found high contents of mycobacterial DNA, 10⁶ to 10⁷ genome equivalents per gram, in Finnish sphagnum peat, sandy soils and mud. A similar result was obtained by a method based on Mycobacterium-specific hybridization of 16S rRNA. Since rRNA is found mainly in live cells, this result shows that the DNA detected by qPCR mainly represented live mycobacteria. Next, I investigated the occurrence of environmental mycobacteria in bedding materials obtained from 5 pig farms with a high prevalence (>4 %) of mycobacteriosis. When I used the same qPCR methods for quantification as for the soils, I found that piggery samples contained non-mycobacterial DNA that was amplified in spite of several mismatches with the primers. I therefore improved the qPCR assay by designing Mycobacterium-specific detection probes. 
Using the probe qPCR assay, I found 10⁵ to 10⁷ genome equivalents of mycobacterial DNA in unused bedding materials and up to 1 000-fold more in bedding collected after use in the piggery. This result shows that there was a source of mycobacteria in the bedding materials purchased by the piggery and that mycobacteria increased in the bedding materials during use in the piggery. Allergic diseases have reached epidemic proportions in urbanized countries. At the same time, a childhood in a rural environment or simple living conditions appears to protect against allergic disorders. Exposure to immunoreactive microbial components in rural environments seems to prevent allergies. I searched for differences in the bacterial communities of two indoor dusts: an urban house dust shown to possess immunoreactivity of the TH2 type and a farm barn dust with TH1 activity. The immunoreactivities of the dusts were revealed by my collaborators, in vitro in human dendritic cells and in vivo in mouse. The dusts had accumulated for >10 years in the respiratory zone (>1.5 m above the floor), thus reflecting the long-term content of airborne bacteria at the two sites. I investigated these dusts by cloning and sequencing bacterial 16S rRNA genes from the DNA contained in the dust. From the TH2-active urban house dust, I isolated 139 16S rRNA gene clones. The most prevalent genera among the clones were Corynebacterium (5 species, 34 clones), Streptococcus (8 species, 33 clones), Staphylococcus (5 species, 9 clones) and Finegoldia (1 species, 9 clones). Almost all of these species are known as colonizers of the human skin and oral cavity. Species of Corynebacterium and Streptococcus have been reported to contain anti-inflammatory lipoarabinomannans and immunoreactive beta-glucans, respectively. Streptococcus mitis, found in the urban house dust, is known as an inducer of TH2-polarized immunity, characteristic of allergic disorders. 
I isolated 152 DNA clones from the TH1-active farm barn dust and found species quite different from those found in the urban house dust. Among others, I found DNA clones representing Bacillus licheniformis, Acinetobacter lwoffii and Lactobacillus, each of which was recently reported to possess anti-allergy immunoreactivity. Moreover, the farm barn dust contained dramatically higher bacterial diversity than the urban house dust. Exposure to this dust thus stimulated the human dendritic cells with multiple microbial components; such stimulation was reported to promote TH1 immunity. The biodiversity in dust may thus be connected to its immunoreactivity. Furthermore, the bacterial biomass in the farm barn dust consisted mainly of live intact bacteria. In the urban house dust, only ~1 % of the biomass appeared as intact bacteria, as judged by microscopy. Fragmented microbes may possess bioactivity different from that of intact cells, as was recently shown for moulds. If this is also valid for bacteria, the different immunoreactivities of the two dusts may be explained by the intactness of the dustborne bacteria. Based on these results, we offer three factors potentially contributing to the polarized immunoreactivities of the two dusts: (i) the species composition, (ii) the biodiversity and (iii) the intactness of the dustborne bacterial biomass. The risk of childhood atopic diseases is 4-fold lower in the Russian than in the Finnish Karelia. This difference across the country border is not explainable by different geo-climatic factors or genetic susceptibilities of the two populations; instead, the explanation must be lifestyle-related. It has already been reported that the microbiological quality of drinking water differs on the two sides of the border. In collaboration with allergists, I investigated dusts collected from homes in the Russian Karelia and in the Finnish Karelia. 
I found that bacterial 16S rRNA genes cloned from the Russian Karelian dusts (10 homes, 234 clones) predominantly represented Gram-positive taxa (the phyla Actinobacteria and Firmicutes, 67 %). The Russian Karelian dusts contained nine-fold more muramic acid (60 to 70 ng mg⁻¹) than the Finnish Karelian dusts (3 to 11 ng mg⁻¹). Among the DNA clones isolated from the Finnish side (n=231), Gram-negative taxa (40 %) outnumbered the Gram-positives (34 %). Out of the 465 DNA clones isolated from the Karelian dusts, 242 were assigned to cultured, validly described bacterial species. In Russian Karelia, animal-associated species, e.g. Staphylococcus and Macrococcus, were numerous (27 clones, 14 unique species). This finding may connect to the difference in the prevalence of allergy, as childhood contacts with pets and farm animals have been connected with low allergy risk. Plant-associated bacteria and plant-borne 16S rRNA genes (chloroplast) were frequent among the DNA clones isolated from the Finnish Karelia, indicating components originating from plants. In conclusion, my work revealed three major differences between the bacterial communities in the Russian and in the Finnish Karelian homes: (i) the high prevalence of Gram-positive bacteria on the Russian side and of Gram-negative bacteria on the Finnish side, (ii) the rich presence of animal-associated bacteria on the Russian side, and (iii) the prevalence of plant-associated bacteria on the Finnish side. One or several of these factors may connect to the differences in the prevalence of allergy.
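The genome-equivalent counts reported above come from quantitative real-time PCR, where the copy number in a reaction is read off a log-linear standard curve relating threshold cycle (Ct) to log10 copy number, then scaled to the whole DNA extract and normalized by sample mass. A hedged sketch of that conversion (the slope, intercept and volume parameters are hypothetical placeholders, not the calibration values of the thesis's assay):

```python
def copies_from_ct(ct, slope=-3.32, intercept=38.0):
    """Convert a Ct value to a copy number via a log-linear standard curve:
    Ct = slope * log10(copies) + intercept.
    A slope of -3.32 corresponds to ~100 % amplification efficiency.
    These calibration values are hypothetical, not from the thesis."""
    return 10 ** ((ct - intercept) / slope)

def genome_equivalents_per_gram(ct, sample_mass_g,
                                elution_volume_ul, reaction_volume_ul,
                                slope=-3.32, intercept=38.0):
    """Scale the per-reaction copy number up to the whole DNA extract,
    then normalize by the mass of sample extracted."""
    copies_per_reaction = copies_from_ct(ct, slope, intercept)
    copies_in_extract = copies_per_reaction * (elution_volume_ul / reaction_volume_ul)
    return copies_in_extract / sample_mass_g
```

With this kind of conversion, a soil extract whose reactions cross threshold around 18-20 cycles lands in the 10⁶ to 10⁷ genome-equivalents-per-gram range reported for the peat and soil samples, under the placeholder calibration above.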
Abstract:
From bark bread to pizza - Food and exceptional circumstances: reactions of Finnish society to crises of food supply. This study on the food supply under exceptional circumstances lies within the nutritional, historical and social sciences. The perspective and questions come under nutrition science but are part of social decision-making. The study focuses on the first and second world wars as well as on contemporary society at the beginning of the 21st century. The main purpose of this study is to explore how Finnish society has responded to crises and what measures it has taken to sustain institutional food services and the food supply of households. The particular interests of the study include school catering and food services in hospitals during the world wars. The situation in households is reflected in the counseling work carried out by state-run or civic organisations. Interest also focuses on the actions of the scientific community. The decisions made in Finland are projected onto the solutions developed in some other European countries. The study is based primarily on archive documents and annual reports prepared by food and health care authorities. Major source materials include scientific and professional publications. The evaluation of the situation in contemporary Finnish society is based on corresponding emergency plans and guidelines. The written material is supplemented by discussions with experts. Food rationing during WWI and WWII differed in extent, details and unity. The food intake of some population groups was occasionally inadequate in quantity, quality and safety. The counseling of the public focused on promoting self-sufficiency, improving cooking skills and widening food habits. One of the most vulnerable groups in regard to nutrition was long-term patients in institutions. As for future development, the world wars were nevertheless important periods for public food services and counseling practices. 
WWII was also an important period for product development in the food industry. Significant work on food substitutes was carried out by Professor Carl Tigerstedt during WWI. The research of Professors A. I. Virtanen and Paavo Simola during WWII focused on vitamins. Crises threatening societies now differ from those faced a hundred years ago. Finland is better prepared, but in many ways more vulnerable to and dependent on other actors. Food rationing is a severe means of handling the scarcity of food, which is why contemporary society relies primarily on preparedness planning. Civic organisations played a key role during the world wars, and establishing an emergency food supply remains on their agenda. Although the objective of protecting the population remains the same for nutrition, food production, and food consumption, threat scenarios and the knowledge and skill levels of citizens are constantly changing. Continuous monitoring and evaluation are therefore needed.
Abstract:
This report is the result of a small-scale experiment looking at improving methods for evaluating environmental laws. The objective of this research was to evaluate the effectiveness of the precautionary principle – an accepted principle of international environmental law – in the context of Australia's endangered species. Two case studies were selected by our team: the Great White Shark and an endangered native Australian plant known as Tylophora linearis.
Abstract:
Introduction: The last half-century of epidemiological enquiry into schizophrenia can be characterized by the search for neurological imbalances and lesions and for genetic factors. The growing consensus is that these directions have failed, and there is now a growing interest in psychosocial and developmental models. Another area of recent interest is epigenetics – the modification of genetic influences by environmental factors. Methods: This integrative review comparatively maps current psychosocial, developmental and epigenetic models of schizophrenia epidemiology to identify crossover and theoretical gaps. Results: In the flood of data being produced around schizophrenia epidemiology, one of the most consistent findings is that schizophrenia is an urban syndrome. Once demographic factors have been discounted, between one-quarter and one-third of all incidence is repeatedly traced back to urbanicity – potentially threatening more established models, such as the psychosocial, genetic and developmental hypotheses. Conclusions: Close analysis demonstrates how current models of schizophrenia epidemiology appear to miss the mark. Furthermore, the built environment appears to be an inextricable factor in all current models and may indeed be a valid epidemiological factor on its own. The reason the built environment has not already become a de rigueur area of epidemiological research is possibly trivial: it simply does not attract enough science, and lacks a hero to promote it alongside other hypotheses.
Abstract:
The purpose of this study is to analyse the development and understanding of the idea of consensus in bilateral dialogues among Anglicans, Lutherans and Roman Catholics. The source material consists of representative dialogue documents from the international, regional and national dialogues from the 1960s until 2006. In general, the dialogue documents argue for agreement/consensus based on commonality or compatibility. Each of the three dialogue processes has specific characteristics and formulates its argument in a unique way. The Lutheran-Roman Catholic dialogue has a particular interest in hermeneutical questions. In the early phases, the documents endeavoured to describe the interpretative principles that would allow the churches to proclaim the Gospel together and to identify the foundation on which agreement in the church is based. This investigation ended up proposing a notion of 'basic consensus', which later developed into a form of consensus that seeks to embrace, not to dismiss, differences (so-called 'differentiated consensus'). The Lutheran-Roman Catholic agreement is based on a perspectival understanding of doctrine. The Anglican-Roman Catholic dialogue emphasises the correctness of interpretations. The documents consciously look towards a 'common future', not the separated past. The dialogue's primary interpretative concept is koinonia. The texts develop a hermeneutics of authoritative teaching that has been described as the 'rule of communion'. The Anglican-Lutheran dialogue is characterised by an instrumental understanding of doctrine. Doctrinal agreement is facilitated by the ideas of coherence, continuity and substantial emphasis in doctrine. The Anglican-Lutheran dialogue proposes a form of 'sufficient consensus' that considers a wide set of doctrinal statements and liturgical practices to determine whether an agreement has been reached to a degree that, although not complete, is sufficient for concrete steps towards unity. 
Chapter V discusses the current challenges of consensus as an ecumenically viable concept. In this part, I argue that the acceptability of consensus as an ecumenical goal is based not only on the understanding of the church but, more importantly, on the understanding of the nature and function of doctrine. The understanding of doctrine has undergone significant changes during the time of the ecumenical dialogues; the major shift has been from a modern paradigm towards a postmodern one. I conclude with proposals towards a form of consensus that would survive philosophical criticism and be both theologically valid and ecumenically acceptable.
Abstract:
In this paper I examine how one political actor, former Prime Minister Kevin Rudd, proposes to use education for the purpose of securing national productivity and foreign policy. I work with Foucault's suggestion that the apparatus of security is the essential technical instrument of governmentality, and that the production of milieu – made up of human, spatial, temporal and cultural objects – and the government of risk are key strategies in the bio-politicisation of security. The discourse analysis also draws on Bacchi to problematise statements that (a) represent both the nation and regional neighbours as governable milieu within the ambit of a whole-of-government approach, and (b) locate literacy and education as both risk and solution in a security apparatus. My examination of the emergence of literacy and education as security technologies takes account of the discursive effects of Rudd's representation of the spaces and scale of national, geopolitical and global policy problems. I argue that in these examples of policy texts, education is used as a discursive tool to secure education workers and youth as subjects of economic interest and sovereign rule.
Abstract:
Background: Australia has one of the highest rates of antibiotic use among OECD countries. Data from the Australian primary healthcare sector suggest that unnecessary antibiotics are prescribed for self-resolving conditions. We need to better understand what drives general practitioners (GPs) to prescribe antibiotics, consumers to seek antibiotics, and pharmacists to fill repeat antibiotic prescriptions. It is also not clear how these individuals trade off the possible benefits that antibiotics may provide in the immediate or short term against the longer-term societal risk of antimicrobial resistance. This project investigates what factors drive decisions to use antibiotics for GPs, pharmacists and consumers, and how these individuals discount the future. Methods: Factors will be gleaned from the published literature and from semi-structured interviews to inform the development of Discrete Choice Experiments (DCEs). Three DCEs will be constructed – one for each group of interest – to allow investigation of which factors are most important in influencing (a) GPs to prescribe antibiotics, (b) consumers to seek antibiotics, and (c) pharmacists to fill legally valid but old or repeat prescriptions of antibiotics. Regression analysis will be conducted to understand the relative importance of these factors. A Time Trade-Off exercise will be developed to investigate how these individuals discount the future. Results: Findings from the DCEs will provide insight into which factors are most important in driving decision-making in antibiotic use for GPs, pharmacists and consumers. Findings from the Time Trade-Off exercise will show what individuals are willing to trade for preserving the miracle of antibiotics. Conclusion: Research findings will contribute to existing national programs to bring about a reduction in the inappropriate use of antibiotics in Australia. 
Specifically, the findings will inform how key messages and public health campaigns are crafted, as well as clinical education and the empowerment of GPs and pharmacists to play a more responsive role as stewards of antibiotic use in the community.
Abstract:
Based on a content analysis of the 12 volumes of the 20th-century auditing reference series Montgomery's Auditing, we find evidence that U.S. auditors increased their attention to fraud detection during or immediately after the economic contractions of the 20th century. Contractions, however, do not seem to have affected auditors' attention to the formal goal of fraud detection. The study suggests that auditors' aversion to the heightened risks of fraud during economic downturns leads them to focus more on fraud detection at those times, regardless of the particular guidance in formal audit standards. This study is the first to find some evidence of a recession-influenced difference between fraud detection practices and formal fraud detection goals.
Abstract:
Spirometry is the most widely used lung function test in the world. It is fundamental in the diagnostic and functional evaluation of various pulmonary diseases. In the studies described in this thesis, the spirometric assessment of the reversibility of bronchial obstruction, its determinants, and its variation are described in a general population sample from Helsinki, Finland. This study is part of the FinEsS study, a collaborative study of the clinical epidemiology of respiratory health between Finland (Fin), Estonia (Es), and Sweden (S). Asthma and chronic obstructive pulmonary disease (COPD) constitute the two major obstructive airways diseases. The prevalence of asthma has increased, with around 6 % of the population in Helsinki reporting physician-diagnosed asthma. The main cause of COPD is smoking, with changes in the smoking habits of the population affecting its prevalence with a delay. Whereas airway obstruction in asthma is by definition reversible, COPD is characterized by fixed obstruction. Cough and sputum production, the first symptoms of COPD, are often misinterpreted as smoker's cough and not recognized as the first signs of a chronic illness; COPD is therefore widely underdiagnosed. More extensive use of spirometry in primary care is advocated to focus smoking cessation interventions on populations at risk. The use of forced expiratory volume in six seconds (FEV6) instead of forced vital capacity (FVC) has been suggested to enable office spirometry to be used in the earlier detection of airflow limitation. Despite spirometry being a widely accepted standard method of assessing lung function, its methodology and interpretation are constantly developing. 
In 2005, the ATS/ERS Task Force issued a joint statement which endorsed the 12 % and 200 ml thresholds for a significant change in forced expiratory volume in one second (FEV1) or FVC during bronchodilation testing, but included the notion that in cases where only FVC improves, it should be verified that this is not caused by a longer exhalation time in post-bronchodilator spirometry. This elicited new interest in the assessment of forced expiratory time (FET), a spirometric variable not usually reported or used in assessment. In this population sample, we examined FET and found it to be on average 10.7 (SD 4.3) s and to increase with ageing and airflow limitation in spirometry. The intrasession repeatability of FET was the poorest of the spirometric variables assessed. Based on the intrasession repeatability, a limit of 3 s was suggested for a significant change in FET during bronchodilation testing. FEV6 was found to perform as well as FVC in the population and in a subgroup of subjects with airways obstruction. In the bronchodilation test, decreases were frequently observed in FEV1 and particularly in FVC. The limit of significant increase based on the 95th percentile of the population sample was 9 % for FEV1 and 6 % for FEV6 and FVC; these are slightly lower than the current limits for single bronchodilation tests (ATS/ERS guidelines). FEV6 proved to be a valid alternative to FVC also in the bronchodilation test and would remove the need to control the duration of exhalation during the spirometric bronchodilation test.
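The 2005 ATS/ERS criterion cited above reduces to a joint threshold: the post-bronchodilator increase in FEV1 or FVC must be at least 12 % of the baseline value and at least 200 ml to count as significant. A minimal sketch of that rule (the function name is illustrative):

```python
def significant_bronchodilator_response(pre_ml, post_ml):
    """ATS/ERS (2005): a bronchodilation response in FEV1 or FVC is
    significant when the increase over baseline is both >= 12 % of the
    pre-bronchodilator value and >= 200 ml."""
    change_ml = post_ml - pre_ml
    return change_ml >= 200 and change_ml >= 0.12 * pre_ml

# A 350 ml rise on a 2 500 ml baseline (14 %) meets both thresholds;
# a 250 ml rise (10 %) meets the absolute threshold but not the relative one.
```

The study's population-based limits (9 % for FEV1, 6 % for FEV6 and FVC) would simply replace the 0.12 factor in such a check; the caveat about longer exhalation time applies when only FVC crosses the threshold.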
Abstract:
Purpose: In the oncology population, where malnutrition prevalence is high, more descriptive screening tools can provide further information to assist triaging and capture acute change. The Patient-Generated Subjective Global Assessment Short Form (PG-SGA SF) is a component of a nutritional assessment tool which could be used for descriptive nutrition screening. The purpose of this study was to conduct a secondary analysis of nutrition screening and assessment data to identify the most relevant information contributing to the ability of the PG-SGA SF to identify malnutrition risk with high sensitivity and specificity. Methods: This was an observational, cross-sectional study of 300 consecutive adult patients receiving ambulatory anti-cancer treatment at an Australian tertiary hospital. Anthropometric and patient descriptive data were collected. The scored PG-SGA generated a score for nutritional risk (PG-SGA SF) and a global rating for nutrition status. Receiver operating characteristic (ROC) curves were generated to determine optimal cut-off scores for combinations of the PG-SGA SF boxes with the greatest sensitivity and specificity for predicting malnutrition according to the scored PG-SGA global rating. Results: The additive scores of boxes 1-3 had the highest sensitivity (90.2 %) while maintaining satisfactory specificity (67.5 %) and demonstrating high diagnostic value (AUC = 0.85, 95 % CI = 0.81-0.89). The inclusion of box 4 (the full PG-SGA SF) did not add further value as a screening tool (AUC = 0.85, 95 % CI = 0.80-0.89; sensitivity 80.4 %; specificity 72.3 %). Conclusions: The validity of the PG-SGA SF in chemotherapy outpatients was confirmed. The present study, however, demonstrated that the functional capacity question (box 4) does not improve the overall discriminatory value of the PG-SGA SF.
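The cut-off analysis behind these results can be sketched in a few lines: each candidate cut-off on the summed box scores is compared against the reference classification (the PG-SGA global rating), yielding a sensitivity and specificity per cut-off, and maximizing Youden's J (sensitivity + specificity - 1) is one common way to pick the optimum. A sketch under those assumptions (toy data, not the study's 300 patients):

```python
def sensitivity_specificity(scores, malnourished, cutoff):
    """Classify patients as at risk when score >= cutoff, then compare with
    the reference labels (True = malnourished) to get (sensitivity, specificity)."""
    tp = sum(1 for s, m in zip(scores, malnourished) if m and s >= cutoff)
    fn = sum(1 for s, m in zip(scores, malnourished) if m and s < cutoff)
    tn = sum(1 for s, m in zip(scores, malnourished) if not m and s < cutoff)
    fp = sum(1 for s, m in zip(scores, malnourished) if not m and s >= cutoff)
    return tp / (tp + fn), tn / (tn + fp)

def best_cutoff(scores, malnourished):
    """Pick the cutoff maximizing Youden's J = sensitivity + specificity - 1,
    a common criterion for an optimal ROC cut-off (maximizing J is equivalent
    to maximizing the sum of sensitivity and specificity)."""
    candidates = sorted(set(scores))
    return max(candidates,
               key=lambda c: sum(sensitivity_specificity(scores, malnourished, c)))

# Toy example: scores 0-6, with the three highest scorers truly malnourished
scores = [0, 1, 2, 3, 4, 5, 6]
labels = [False, False, False, False, True, True, True]
print(best_cutoff(scores, labels))  # 4 separates the groups perfectly here
```

The study's finding that box 4 adds no discriminatory value corresponds to the AUC and the optimal sensitivity/specificity pair not improving when box 4's score is added to the candidate sums.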