Abstract:
Matrix metalloproteinase (MMP)-8, collagenase-2, is a key mediator of irreversible tissue destruction in chronic periodontitis and is detectable in gingival crevicular fluid (GCF). MMP-8 mostly originates from neutrophil leukocytes, first-line defence cells that are abundant in GCF, especially in inflammation. MMP-8 is capable of degrading almost all extracellular matrix and basement membrane components and is especially efficient against type I collagen. Thus the expression of MMP-8 in GCF could be valuable in monitoring the activity of periodontitis and possibly offers a diagnostic means to predict progression of periodontitis. In this study the value of MMP-8 detection from GCF in monitoring of periodontal health and disease was evaluated with special reference to its ability to differentiate between periodontal health and different disease states of the periodontium and to recognise the progression of periodontitis, i.e. active sites. For chair-side detection of MMP-8 from GCF or peri-implant sulcus fluid (PISF) samples, a dipstick test based on immunochromatography involving two monoclonal antibodies was developed. The immunoassay for the detection of MMP-8 from GCF was found to be more suitable for monitoring of periodontitis than detection of GCF elastase concentration or activity. Periodontally healthy subjects and individuals suffering from gingivitis or periodontitis could be differentiated by means of GCF MMP-8 levels and dipstick testing when the positive threshold value of the MMP-8 chair-side test was set at 1000 µg/l. MMP-8 dipstick test results from periodontally healthy subjects and from subjects with gingivitis were mainly negative, while periodontitis patients' sites with deep pockets (≥ 5 mm) that bled on probing were most often test positive. Periodontitis patients' GCF MMP-8 levels decreased with hygiene-phase periodontal treatment (scaling and root planing, SRP) and decreased further during the three-month maintenance phase. A decrease in GCF MMP-8 levels could be monitored with the MMP-8 test. Agreement between the test stick and the quantitative assay was very good (κ = 0.81) and the test provided a baseline sensitivity of 0.83 and specificity of 0.96. During the 12-month longitudinal maintenance phase, periodontitis patients' progressing sites (sites with an increase in attachment loss ≥ 2 mm during the maintenance phase) had elevated GCF MMP-8 levels compared with stable sites. Overall, mean MMP-8 concentrations in smokers' (S) sites were lower than in non-smokers' (NS) sites, but in progressing S and NS sites concentrations were at an equal level. Sites with exceptionally and repeatedly elevated MMP-8 concentrations during the maintenance phase were clustered in smoking patients with poor response to SRP (refractory patients). These sites in particular were identified by the MMP-8 test. Subgingival plaque samples from periodontitis patients' deep periodontal pockets were examined by polymerase chain reaction (PCR) to find out whether periodontal lesions may serve as a niche for Chlamydia pneumoniae. Findings were compared with the clinical periodontal parameters and GCF MMP-8 levels to determine the correlation with periodontal status. Traces of C. pneumoniae were identified from one periodontitis patient's pooled subgingival plaque sample by means of PCR. After periodontal treatment (SRP) the sample was negative for C. pneumoniae. Clinical parameters and biomarkers (MMP-8) of the patient with the positive C. pneumoniae finding did not differ from those of the other study patients.
In this study it was concluded that GCF MMP-8 concentrations at sites from periodontally healthy individuals, subjects with gingivitis, and patients with periodontitis are at different levels. The cut-off value of the developed MMP-8 test is at an optimal level to differentiate between these conditions and can possibly be utilised to identify individuals at risk of the transition from gingivitis to periodontitis. In periodontitis patients, repeatedly elevated GCF MMP-8 concentrations may indicate sites at risk of progression of periodontitis as well as patients with poor response to conventional periodontal treatment (SRP). This can be monitored by MMP-8 testing. Despite the lower mean GCF MMP-8 concentrations in smokers, a fraction of smokers' sites expressed very high MMP-8 concentrations together with enhanced periodontal activity and could be identified with the MMP-8-specific chair-side test. Deep periodontal lesions may be niches for non-periodontopathogenic micro-organisms with systemic effects, such as C. pneumoniae, and may play a role in their transmission from one subject to another.
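As an illustration of the agreement and accuracy statistics quoted above (κ = 0.81, sensitivity 0.83, specificity 0.96), the minimal sketch below computes Cohen's kappa, sensitivity and specificity from a 2×2 cross-tabulation of a dichotomous test against a reference assay. The counts are invented for illustration only and are not study data.

```python
# Minimal sketch: Cohen's kappa, sensitivity and specificity from a 2x2 table.
# The counts used in the example call are hypothetical.

def diagnostic_summary(tp, fp, fn, tn):
    """Agreement and accuracy of a dichotomous test against a reference standard.

    tp: test+ / reference+   fp: test+ / reference-
    fn: test- / reference+   tn: test- / reference-
    """
    n = tp + fp + fn + tn
    sensitivity = tp / (tp + fn)            # true-positive rate
    specificity = tn / (tn + fp)            # true-negative rate
    observed_agreement = (tp + tn) / n
    # Chance-expected agreement from the marginal totals
    expected = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
    kappa = (observed_agreement - expected) / (1 - expected)
    return sensitivity, specificity, kappa

# Hypothetical counts, e.g. dipstick vs. quantitative immunoassay at a 1000 µg/l cut-off
sens, spec, kappa = diagnostic_summary(tp=25, fp=2, fn=5, tn=48)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}, kappa={kappa:.2f}")
```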
Abstract:
This study aimed at elucidating real-life aspects of restorative treatment practices. In addition, dentists' views and perceptions of restorative treatment practices, and variation in these practices with respect to dentist-related factors, were evaluated. Reasons for the placement and replacement of restorations, material selection, posterior restoration longevity, and the use of local anesthesia were assessed on two cross-sectional data sets. Data from the Helsinki Public Dental Service (PDS) included details on 3057 restorations performed by dentists (n=134) during routine clinical work in 2001. The other PDS data, from Vantaa, were based on 205 patient records of young adults containing information on 1969 restorations investigated retrospectively, working backwards from 1994-1996; 51 dentists performed the restorations. In addition, dentists' self-reported use of local anesthesia and estimates of restoration longevity were investigated by means of a nationwide questionnaire sent to 592 general dental practitioners selected by systematic sampling from the membership list of the Finnish Dental Association in 2004. All data sets included some background information on dentists, such as gender, year of birth or graduation, and working sector. In the PDS in 2001, primary caries was the reason for placement of a restoration more often among patients aged under 19 years than among older patients (p<0.001). Among patients over 36 years of age, replacements represented the majority. Regarding dentist-related factors, replacements of restorations were made by younger dentists more frequently than by older dentists (p<0.001). In the PDS in 1994-1996, the replacement rate of posterior restorations was greater among female dentists than among male dentists (p=0.01), especially for amalgams (p=0.008). The mean age of replaced posterior restorations among young adults was 8.9 (SD 5.2) years for amalgam and 2.4 (SD 1.4) years for tooth-colored restorations, the actual replacement rate for all existing posterior restorations being 7% in the PDS in 1994-1996. Of all restorative materials used in the PDS in 2001, a clear majority (69%) were composites. Local anesthesia was used in 48% of cases and more frequently for older patients (55%) than for patients aged under 13 years (35%) (p<0.001). Younger dentists used local anesthesia for primary restorations more often than did older dentists (p<0.001), especially for primary teeth (p=0.005). Working sector had an impact on dentists' self-reported use of local anesthesia and estimates of restoration longevity; public sector dentists reported using local anesthesia more frequently than private sector dentists for Class II (p=0.04) and for Class III restorations (p=0.01). Private sector dentists gave longer estimates of posterior composite longevity than public sector dentists (p=0.001). In conclusion, restorative treatment practices seem to vary according to patient age and also dentist-related factors. Replacements of restorations are common for adults. For children, clear underuse of local anesthesia prevails.
Abstract:
The aim of the present study was to assess dental health and its determinants among 15-year-olds in Tehran, Iran, and to evaluate the impact of a school-based educational intervention on their oral cleanliness and gingival health. The total sample comprised 506 students. Data collection was performed through a clinical dental examination and a self-administered structured questionnaire. This questionnaire covered the student's background information, socio-economic status, self-perceived dental health, tooth-brushing, and smoking. The clinical dental examination covered caries experience, gingival status, dental plaque status, and orthodontic treatment needs. Participation was voluntary, and all students responded to the questionnaire. Only three students refused the clinical dental examination. The intervention was based on exposing students to dental health education through a leaflet and a videotape designed for the present study. The outcome examinations took place 12 weeks after the baseline among the three groups of the intervention trial (leaflet, videotape, and control). High participation rates at the baseline and scanty drop-outs (7%) in the intervention speak for the reliability of the results. The mean value of the DMFT (D=decayed, M=missing, and F=filled teeth) index of the 15-year-olds was 2.1, comprising DT=0.9, MT=0.2, and FT=1.0, with no gender differences. Dental plaque existed on at least one index tooth of all students, and a healthy periodontium (Community Periodontal Index=0) was found in less than 10% of students. Need for caries treatment existed in 40% of students, for scaling in 24%, for oral hygiene instructions in all, and for orthodontic treatment in 26%. Students with the highest level of parents' education had fewer dental caries (36% vs. 48%) and less dental plaque (77% vs. 88%). Of all students, 78% assessed their dental health as good or better. Those with DMFT=0 (73% vs. 27%) and DT=0 (68% vs. 32%) assessed their dental health as good or better even more often. Smokers comprised 5% of the boys and 2% of the girls. Smoking was more common among students of less-educated parents (6% vs. 3%). Of all students, 26% reported twice-daily tooth-brushing; girls (38% vs. 15%) and those of higher socio-economic background (33% vs. 17%) did so more frequently. The best predictors of a good level of oral cleanliness were female gender and twice-daily tooth-brushing. The present study demonstrated that a school-based educational intervention can be effective in the short term in improving the oral cleanliness and gingival health of adolescents. At least a 50% reduction in the number of teeth with dental plaque compared to baseline was achieved by 58% of the students in the leaflet group, by 37% in the videotape group, and by 10% of the controls. Corresponding figures for gingival bleeding were 72%, 64%, and 30%. For improving the oral cleanliness and gingival health of adolescents in countries such as Iran with a developing oral health system, school-based educational interventions should be established with a focus on oral self-care and oral health education messages. Emphasizing the immediate gains from good oral hygiene, such as fresh breath, clean teeth, and an attractive appearance, should be a key aspect in motivating these adolescents to learn and maintain good dental health, whilst in planning school-based dental health interventions, special attention should be given to boys and those with lower socio-economic status.
Author's address: Reza Yazdani, Department of Oral Public Health, Institute of Dentistry, University of Helsinki, P.O. Box 41, FI-00014 Helsinki, Finland. E-mail: reza.yazdani@helsinki.fi
Abstract:
Drug Analysis without Primary Reference Standards: Application of LC-TOFMS and LC-CLND to Biofluids and Seized Material
Primary reference standards for new drugs, metabolites, designer drugs or rare substances may not be obtainable within a reasonable period of time, or their availability may be hindered by extensive administrative requirements. Standards are usually costly and may have a limited shelf life. Finally, many compounds are not available commercially and sometimes not at all. A new approach within forensic and clinical drug analysis involves substance identification based on accurate mass measurement by liquid chromatography coupled with time-of-flight mass spectrometry (LC-TOFMS) and quantification by LC coupled with chemiluminescence nitrogen detection (LC-CLND), which possesses an equimolar response to nitrogen. Formula-based identification relies on the fact that the accurate mass of an ion from a chemical compound corresponds to the elemental composition of that compound. Single-calibrant, nitrogen-based quantification is feasible with a nitrogen-specific detector, since approximately 90% of drugs contain nitrogen. A method was developed for toxicological drug screening in 1 ml urine samples by LC-TOFMS. A large target database of exact monoisotopic masses was constructed, representing the elemental formulae of reference drugs and their metabolites. Identification was based on matching the sample component's measured parameters with those in the database, including accurate mass and retention time, if available. In addition, an algorithm for isotopic pattern match (SigmaFit) was applied. Differences in ion abundance in urine extracts did not affect the mass accuracy or the SigmaFit values. For routine screening practice, a mass tolerance of 10 ppm and a SigmaFit tolerance of 0.03 were established. Seized street drug samples were analysed instantly by LC-TOFMS and LC-CLND, using a dilute-and-shoot approach. In the quantitative analysis of amphetamine, heroin and cocaine findings, the mean relative difference between the results of LC-CLND and the reference methods was only 11%. In blood specimens, liquid-liquid extraction recoveries for basic lipophilic drugs were first established, and the validity of the generic extraction recovery-corrected single-calibrant LC-CLND was then verified with proficiency test samples. The mean accuracy was 24% and 17% for plasma and whole blood samples, respectively, all results falling within the confidence range of the reference concentrations. Further, metabolic ratios for the opioid drug tramadol were determined in a pharmacogenetic study setting. Extraction recovery estimation, based on model compounds with similar physicochemical characteristics, produced clinically feasible results without reference standards.
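Two of the calculations behind this approach are simple enough to sketch: the parts-per-million mass error checked against the 10 ppm tolerance, and single-calibrant quantification under the assumption of an equimolar CLND response to nitrogen, in which the analyte concentration follows from the calibrant's nitrogen-molar response scaled by the analyte's nitrogen count and molar mass. The sketch below is a schematic illustration under those stated assumptions, with invented numbers; it is not the instrument software used in the study.

```python
# Schematic sketch of the two calculations described above (illustrative values only).

def ppm_error(measured_mz, theoretical_mz):
    """Mass accuracy in parts per million, to be compared against a 10 ppm tolerance."""
    return (measured_mz - theoretical_mz) / theoretical_mz * 1e6

def clnd_concentration(peak_area, calib_area, calib_n_molar, n_atoms, molar_mass):
    """Single-calibrant LC-CLND quantification assuming an equimolar nitrogen response.

    peak_area     : CLND peak area of the analyte
    calib_area    : CLND peak area of the nitrogen calibrant
    calib_n_molar : nitrogen concentration of the calibrant (mol N / l)
    n_atoms       : number of nitrogen atoms in the analyte molecule
    molar_mass    : molar mass of the analyte (g/mol)
    Returns the analyte mass concentration (g/l).
    """
    analyte_n_molar = peak_area / calib_area * calib_n_molar  # mol N / l in the sample
    analyte_molar = analyte_n_molar / n_atoms                 # mol analyte / l
    return analyte_molar * molar_mass

# Example: amphetamine (C9H13N, one nitrogen, ~135.21 g/mol), purely illustrative numbers
print(ppm_error(136.1125, 136.1121))                    # ~2.9 ppm, within a 10 ppm window
print(clnd_concentration(5.0e4, 1.0e5, 2.0e-3, 1, 135.21))  # ~0.135 g/l
```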
Abstract:
Background: The incidence of sexually transmitted infections (STIs) in most EU states has gradually increased, and the rate of newly diagnosed HIV cases has doubled since 1999. STIs differ in their clinical features, prognosis and transmission dynamics, though they share a common factor in their mode of transmission: human behaviour. The evolution of STI epidemiology involves the joint action of biological, epidemiological and societal factors. Of the more immediate factors, besides timely diagnosis and appropriate treatment, STI incidence is influenced by population patterns of sexual risk behaviour, particularly the number of sexual partners and the frequency of unprotected intercourse. Assessment of sexual behaviour, its sociodemographic determinants and time-trends is important in understanding the distribution and dynamics of STI epidemiology. Additionally, examination of basic structural determinants, such as the increased level of migration, changes in gender dynamics and the impacts of globalization, with its increasing alignment of values and beliefs, can reveal future challenges related to STI epidemiology. STI case surveillance together with surveillance of sexual behaviour can guide the identification of preventive strategies, assess their effectiveness and predict emerging trends. The objective of this study was to provide baseline data on sexual risk behaviour, self-reported STIs and their patterns by sociodemographic factors, as well as associations of sexual risk behaviour with substance use, among young men in Finland and Estonia. In Finland, national population-based data on adult men's sexual behaviour are limited. The findings are discussed in the context of STI epidemiology as well as their possible implications for public health policies and prevention strategies. Materials and Methods: Data from three different cross-sectional population-based surveys conducted in Finland and Estonia during 1998-2005 were used. Sexual behaviour- and health-related questions were incorporated in two surveys in Finland: the Health 2000, a large-scale general health survey, focused on young adults, and the Military health behavioural survey of military conscripts participating in the mandatory military training. Through research collaboration with Estonia, questions similar to those in the Finnish surveys were introduced to the second Estonian HIV/AIDS survey, which was targeted at young adults. All surveys applied mail-returned, anonymous, self-administered questionnaires with multiple-choice formatted answers. Results: In Finland, differences in sexual behaviour between young men and women were minor. An age-stratified analysis revealed that the sex-related difference observed in the youngest age group (18-19 years) levelled off in the age group 20-24 and almost disappeared among those aged 25-29. Marital status was the most important sociodemographic correlate of sexual behaviour for both sexes, with singles reporting higher numbers of lifetime partners and more condom use. This effect was stronger for women than for men. However, of those who had sex with casual partners, 15% were married or co-habiting, with no difference between male and female respondents. According to the Military health behavioural survey, young men's sexual risk behaviour in Finland did not markedly change between 1998 and 2005.
Approximately 30-40% of young men had had multiple sex partners (more than five) in their lifetime, over 20% reported having had multiple sex partners (at least three) over the past year, and 50% did not use a condom in their last sexual intercourse. Some 10% of men reported an accumulation of risk factors, i.e. having had both multiple sex partners and unprotected last intercourse over the year preceding the survey. When differences and similarities were viewed within Finland and Estonia, a clear sociodemographic patterning of sexual risk behaviour and self-reported STIs was found in Finland, but a somewhat less consistent pattern in Estonia. Generally, both alcohol and drug use were strong correlates of sexual risk behaviour and self-reported STIs in Finland and Estonia, having a greater effect on engagement with multiple sex partners than on unprotected intercourse or self-reported STIs. In Finland, alcohol use, relative to drug use, was a stronger predictor of sexual risk behaviour and self-reported STIs, while in Estonia drug use predicted sexual risk behaviour and self-reported STIs more strongly than alcohol use. Conclusions: The study results point to the importance of preventing sexual risk behaviour, particularly through strategies that integrate sexual risk with alcohol and drug use risks. The results also point to the need to focus further research on sexual behaviour and STIs among young people: on tracking trends in the general population as well as applying in-depth research to identify and learn from vulnerable and high-risk population groups for STIs who are exposed to a combination of risk factors.
Abstract:
Aims: The aims of this study were 1) to identify and describe health economic studies that have used quality-adjusted life years (QALYs) based on actual measurements of patients' health-related quality of life (HRQoL); 2) to test the feasibility of routine collection of HRQoL data as an indicator of the effectiveness of secondary health care; and 3) to establish and compare the cost-utility of three large-volume surgical procedures in a real-world setting in the Helsinki University Central Hospital, a large referral hospital providing secondary and tertiary health-care services for a population of approximately 1.4 million. Patients and methods: To identify studies that have used QALYs as an outcome measure, a systematic search of the literature was performed using the Medline, Embase, CINAHL, SCI and Cochrane Library electronic databases. Initial screening of the identified articles involved two reviewers independently reading the abstracts; the full-text articles were also evaluated independently by two reviewers, with a third reviewer used in cases where the two reviewers could not reach a consensus on which articles should be included. The feasibility of routinely evaluating the cost-effectiveness of secondary health care was tested by setting up a system for collecting data on approximately 4 900 patients' HRQoL before and after operative treatments performed in the hospital. The HRQoL data used as an indicator of treatment effectiveness were combined with diagnostic and financial indicators routinely collected in the hospital. To compare the cost-effectiveness of three surgical interventions, 712 patients admitted for routine operative treatment completed the 15D HRQoL questionnaire before and also 3-12 months after the operation. QALYs were calculated using the obtained utility data and the expected remaining life years of the patients. Direct hospital costs were obtained from the clinical patient administration database of the hospital, and a cost-utility analysis was performed from the perspective of the provider of secondary health care services. Main results: The systematic review (Study I) showed that although QALYs gained are considered an important measure of the effectiveness of health care, the number of studies in which QALYs are based on actual measurements of patients' HRQoL is still fairly limited. Of the reviewed full-text articles, only 70 reported QALYs based on actual before-after measurements using a valid HRQoL instrument. Collection of simple cost-effectiveness data in secondary health care is feasible and could easily be expanded and performed on a routine basis (Study II). It allows meaningful comparisons between various treatments and provides a means for allocating limited health care resources. The cost per QALY gained was €2 770 for cervical operations and €1 740 for lumbar operations. In cases where surgery was delayed, the cost per QALY was doubled (Study III). The cost per QALY varied between subgroups in cataract surgery (Study IV). The cost per QALY gained was €5 130 for patients having both eyes operated on and €8 210 for patients with only one eye operated on during the 6-month follow-up. In patients whose first eye had been operated on prior to the study period, the mean HRQoL deteriorated after surgery, thus precluding the establishment of the cost per QALY.
In arthroplasty patients (Study V), the mean cost per QALY gained in a one-year period was €6 710 for primary hip replacement, €52 270 for revision hip replacement, and €14 000 for primary knee replacement. Conclusions: Although the importance of cost-utility analyses has been stressed during recent years, there are only a limited number of studies in which the evaluation is based on patients' own assessment of treatment effectiveness. Most cost-effectiveness and cost-utility analyses are based on modeling that employs expert opinion regarding the outcome of treatment, not on patient-derived assessments. Routine collection of effectiveness information from patients entering treatment in secondary health care turned out to be easy enough and did not, for instance, require additional personnel on the wards in which the study was executed. The mean patient response rate was more than 70%, suggesting that patients were happy to participate and appreciated the fact that the hospital showed an interest in their well-being even after the actual treatment episode had ended. Spinal surgery leads to a statistically significant and clinically important improvement in HRQoL. The cost per QALY gained was reasonable, at less than half of that observed, for instance, for hip replacement surgery. However, prolonged waiting for an operation approximately doubled the cost per QALY gained from the surgical intervention. The mean utility gain following routine cataract surgery in a real-world setting was relatively small and confined mostly to patients who had had both eyes operated on. The cost of cataract surgery per QALY gained was higher than previously reported and was associated with a considerable degree of uncertainty. Hip and knee replacement both improve HRQoL. The cost per QALY gained from knee replacement is two-fold compared to hip replacement. Cost-utility results from the three studied specialties showed that there is great variation in the cost-utility of surgical interventions performed in a real-world setting, even when only common, widely accepted interventions are considered. However, the cost per QALY of all the studied interventions, except for revision hip arthroplasty, was well below €50 000, a figure sometimes cited in the literature as a threshold level for the cost-effectiveness of an intervention. Based on the present study it may be concluded that routine evaluation of the cost-utility of secondary health care is feasible and produces information essential for a rational and balanced allocation of scarce health care resources.
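The arithmetic described above, a before-after utility change projected over expected remaining life years, with direct hospital costs divided by that gain, can be sketched in a few lines. The numbers below are invented, and no discounting is applied (the study's own analyses may have handled discounting differently); this is an illustration of the calculation, not a reproduction of the study's results.

```python
# Minimal sketch of a cost-per-QALY calculation; all values are invented.

def qalys_gained(utility_before, utility_after, remaining_life_years):
    """QALY gain as the utility change projected over expected remaining life years."""
    return (utility_after - utility_before) * remaining_life_years

def cost_per_qaly(direct_costs, qaly_gain):
    """Cost-utility ratio from the provider's perspective."""
    return direct_costs / qaly_gain

gain = qalys_gained(utility_before=0.80, utility_after=0.86, remaining_life_years=20)
print(f"QALYs gained: {gain:.2f}")                         # 1.20
print(f"Cost per QALY: {cost_per_qaly(8000, gain):.0f}")   # about 6 667 per QALY
```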
Abstract:
This study concerns the implementation of steering by contracting in health care units and in the work of the doctors employed by them. The study analyses how contracting as a process is being implemented in hospital district units, health centres and in the work of their doctors, as well as how these units carry out their operations and patient care within the restrictions set by the contracts. Based on interviews with doctors, the study analyses the realisation of operations within the units from the doctors' perspective and through their work. The key result of the study is that the steering impact of contracting was not felt at the level of practical work. The contracting was implemented by assigning the related tasks to management only. The management implemented the contract by managing their resources rather than by intervening in doctors' activities or the content of their tasks. The steering did not extend to improving practical care processes. This allowed core operations to continue unchanged, in an autonomous manner and in part protected from the impacts of contracting. In health centres, the contract concluded was viewed as merely steering the operations of the hospital district, and its implementation did not receive the support of the centres. The fact that primary health care and specialised health care constitute separate contracting parties had adverse effects on the contract's implementation and the integration of care. A theoretical review unveiled several reasons for the failure of steering by contracting to alter operations within units. These included the perception of steering by contracting as a weak change incentive. The doctors shunned the introduction of an economic logic and ideology into health care and viewed steering by contracting as a hindrance to delivering care to patients and a disturbance to their work and patient relationships. Contracting caused tensions between representatives of the financial administration and health care professionals. It also caused internal tensions, and it had varying impacts on different specialities, including varying potential to influence contracts. Most factors preventing the realisation of the steering objective could have been ameliorated through positive leadership. There is a need to bridge the gap between financial steering and patient work. Key measures include encouraging the commitment of middle management, supporting leadership expertise and identifying the right methods of contributing to a mutual understanding between the cultures of financing, administration and health care. Criticism of the purchaser's expertise and the view that undersized orders are due to the purchaser's financial difficulties underline the importance of the purchaser's size. Overly detailed, product-based contracts seemed to place the focus on the quantities and costs of services rather than on health impacts and the efficiency of operations. Bundling contracts into larger service packages would encourage the enhancement of operations. Steering by contracting represents unexploited potential: it could function as a forum for integrated regional planning of services, and for the prioritisation and integration of care, and offer an opportunity and an incentive for developing core operations.
Abstract:
In line with major demographic changes in other Northern European and North American countries and Australia, being nonmarried is becoming increasingly common in Finland, and the proportions of cohabiters and of persons living alone have grown in recent decades. Official marital status no longer reflects an individual's living arrangement, as single, divorced and widowed persons may live alone, with a partner, with children, with parents, with siblings, or with unrelated persons. Thus, more than official marital status, living arrangements may be a stronger discriminator of one's social bonds and health. The general purpose of this study was to deepen our current understanding of the magnitude, trends, and determinants of ill health by living arrangements in the Finnish working-age population. Distinct measures of different dimensions of poor health, as well as an array of associated factors, provided a comprehensive picture of health differences by living arrangements and helped to assess the role of other factors in the interpretation of these differences. Mortality analyses were based on Finnish census records at the end of 1995 linked with cause-of-death registers for 1996-2000. The data included all persons aged 30 and over. Morbidity analyses were based on two comparable cross-sectional studies conducted twenty years apart (the Mini-Finland Survey in 1978-80 and the Health 2000 Survey in 2000-01). Both surveys were based on nationally representative samples of Finns aged 30 and over, and benefited from high participation rates. With the exception of the mortality analyses, this study focused on health differences among the working-age population (mortality in the age groups 30-64 and 65 and over, self-rated health and mental health in the age group 30-64, and unhealthy alcohol use in the age group 30-54). Compared with all nonmarried groups, married men and women exhibited the best health in terms of mortality, self-rated health, mental health and unhealthy alcohol use. Cohabiters did not differ from married persons in terms of self-rated health or mental health, but did exhibit excess unhealthy alcohol use and high mortality, particularly from alcohol-related causes. Compared with the married, persons living alone or with someone other than a partner exhibited elevated mortality as well as excess poor mental health and unhealthy alcohol use. By all measures of health, men and women living alone tended to be in the worst position. Over the past twenty years, self-rated health had improved least among single men and women and widowed women, and most among cohabiting women. The association between living arrangements and health has many possible explanations. The health-related selection theory suggests that healthy people are more likely to enter and maintain a marriage or a consensual union than those who are unhealthy (direct selection), or that a variety of health-damaging behavioural and social factors increase the likelihood of ill health and the probability of remaining without a partner or becoming separated from one's partner (indirect selection). According to the social causation theory, marriage or cohabitation has a health-promoting effect, whereas living alone or with others than a partner has a detrimental effect on health. In this study, the role of other factors that are mainly assumed to reflect selection appeared to be rather modest.
Social support, which reflects social causation, contributed only modestly to differences in unhealthy alcohol use by living arrangements, but had a larger effect on differences in poor mental health. Socioeconomic factors and health-related behaviour, which reflect both selection and causation, appeared to play a more important role in the excess poor health of cohabiters and of persons living alone or with someone other than a partner, relative to married persons. Living arrangements were strongly connected to various dimensions of ill health. In particular, alcohol consumption appeared to be of great importance in the association between living arrangements and health. To the extent that the proportion of nonmarried persons continues to grow and their health does not improve at the same rate as that of married persons, the challenges that nonmarried persons currently pose to public health will likely increase.
Abstract:
Reverse cholesterol transport (RCT) is an important function of high-density lipoproteins (HDL) in protection against atherosclerosis. RCT is the process by which HDL stimulates cholesterol removal from peripheral cells and transports it to the liver for excretion. Premenopausal women have a reduced risk of atherosclerosis compared to age-matched men, and a positive correlation exists between serum 17β-estradiol (E2) and HDL levels in premenopausal women, supporting a role for E2 in atherosclerosis prevention. In premenopausal women, E2 associates with HDL as E2 fatty acyl esters. Discovery of the cellular targets and metabolism, and assessment of the macrophage cholesterol efflux potential, of these HDL-associated E2 fatty acyl esters were the major objectives of this thesis (Studies I, III, and IV). Soy phytoestrogens, which are related to E2 in both structure and function, have been proposed to be protective against atherosclerosis, but the evidence to support these claims is conflicting. Therefore, another objective of this thesis was to assess the ability of serum from postmenopausal women treated with isoflavone supplements (compared to placebo) to promote macrophage cholesterol efflux (Study II). The scope of this thesis was to cover the roles that HDL-associated E2 fatty acyl esters have in the cellular aspects of RCT and to determine whether soy isoflavones can also influence RCT mechanisms. SR-BI was a pivotal cellular receptor, responsible for the hepatic and macrophage uptake and the macrophage cholesterol efflux potential of HDL-associated E2 fatty acyl esters. Functional SR-BI was also critical for proper LCAT esterification activity, which could affect HDL-associated E2 fatty acyl ester assembly and function. In hepatic cells, LDL receptors also contributed to HDL-associated E2 fatty acyl ester uptake, and in macrophages, estrogen receptors (ERs) were necessary for both HDL-associated E2 ester-specific uptake and cholesterol efflux potential. HDL containing E2 fatty acyl esters (E2-FAE) stimulated enhanced cholesterol efflux compared to male HDL (which is deficient in E2), demonstrating the importance of the E2 ester in this process. In support of this, premenopausal female HDL, which naturally contains E2, showed greater macrophage cholesterol efflux than male HDL. Additionally, hepatic and macrophage cells hydrolyzed the HDL-associated E2 fatty acyl ester into unesterified E2. This could have important biological ramifications because E2, not the esterified form, has potent cellular effects which may influence RCT mechanisms. Lastly, soy isoflavone supplementation in postmenopausal women did not modulate ABCA1-specific macrophage cholesterol efflux but did increase plasma pre-β HDL levels, a subclass of HDL. Therefore, the impact of isoflavones on RCT and cardiovascular health needs to be further investigated. Taken as a whole, HDL-associated E2 fatty acyl esters from premenopausal women and soy phytoestrogen treatment in postmenopausal women may be important factors that increase the efficiency of RCT through cellular lipoprotein-related processes and may have direct implications for the cardiovascular health of women.
Abstract:
The aim of this dissertation was to examine the determinants of severe back disorders leading to hospital admission in Finland. First, back-related hospitalisations were considered from the perspective of socioeconomic status, occupation, and industry. Secondly, the significance of psychosocial factors at work, sleep disturbances, and lifestyle factors such as smoking and overweight was studied as predictors of hospitalisation due to back disorders. Two sets of data were used: 1) population-based data comprising all occupationally active Finns aged 25-64, including hospitalisations due to back disorders in 1996, and 2) a cohort of employees followed up from 1973 to 2000 for hospitalisation due to back disorders. The results of the population-based study showed that people in physically strenuous industries and occupations, such as agriculture and manufacturing, were at an increased risk of being hospitalised for back disorders. The lowest hospitalisation rates were found in sedentary occupations. Occupational class and the level of formal education were independently associated with hospitalisation for back disorders. This stratification was fairly consistent across age groups and genders. Men had a slightly higher risk of becoming hospitalised than women, and the risk increased with age in both genders. The results of the prospective cohort study showed that psychosocial factors at work, such as low job control and low supervisor support, predicted subsequent hospitalisation for back disorders even when adjustments were made for occupational class and physical workload history. However, psychosocial factors did not predict hospital admissions due to intervertebral disc disorders, only admissions due to other back disorders. Smoking and overweight, in contrast, predicted only hospitalisation for intervertebral disc disorders. These results suggest that the etiological factors of disc disorders and other back disorders differ from each other. The study concerning the association of sleep disturbances and other distress symptoms with hospitalisation for back disorders revealed that sleep disturbances predicted subsequent hospitalisation for all back disorders after adjustment for chronic back disorders and recurrent back symptoms at baseline, as well as for work-related load and lifestyle factors. Other distress symptoms were not predictive of hospitalisation.
Abstract:
Although the principle of equal access to medically justified treatment has been promoted by official health policies in many Western health care systems, practices do not completely meet policy targets. Waiting times for elective surgery vary between patient groups and regions, and growing problems in the availability of services threaten equal access to treatment. Waiting times have come to the attention of decision-makers, and several policy initiatives have been introduced to ensure the availability of care within a reasonable time. In Finland, for example, the treatment guarantee came into force in 2005. However, no consensus exists on the optimal waiting time for different patient groups. The purpose of this multi-centre randomized controlled trial was to analyse health-related quality of life, pain and physical function in total hip or knee replacement patients during the waiting time and to evaluate whether the waiting time is associated with patients' health outcomes at admission. This study also assessed whether the length of waiting time is associated with social and health services utilization in patients awaiting total hip or knee replacement. In addition, patients' health-related quality of life was compared with that of the general population. Consecutive patients with a need for a primary total hip or knee replacement due to osteoarthritis were placed on the waiting list between August 2002 and November 2003. Patients were randomly assigned to a short waiting time (maximum 3 months) or a non-fixed waiting time (waiting time not fixed in advance; instead the patient followed the hospital's routine practice). Patients' health-related quality of life was measured upon placement on the waiting list and again at hospital admission using the generic 15D instrument. Pain and physical function were evaluated using the self-report Harris Hip Score for hip patients and a scale modified from the Knee Society Clinical Rating System for knee patients. Utilization measures were the use of home health care, rehabilitation and social services, physician visits and inpatient care. Health and social services use was low in both waiting-time groups. The most common services used while waiting were rehabilitation services and informal care, including unpaid care provided by relatives, neighbours and volunteers. Although patients suffered from clear restrictions in usual activities and physical functioning, they seemed primarily to lean on informal care and personal networks instead of professional care. While longer waiting time did not result in poorer health-related quality of life at admission, and use of services during the waiting time was similar to that at the time of placement on the list, there are likely to be higher costs of waiting for people who wait longer simply because they use services for a longer period. In economic terms, this would represent a negative impact of waiting. Only a few reports have been published on the health-related quality of life of patients awaiting total hip or knee replacement. These findings demonstrate that, in addition to physical dimensions of health, patients suffered from restrictions in psychological well-being such as depression, distress and reduced vitality. This raises the question of how to support patients who suffer from psychological distress during the waiting time and how to develop strategies to improve patients' own initiatives to reduce symptoms and the burden of waiting.
Key words: waiting time, total hip replacement, total knee replacement, health-related quality of life, randomized controlled trial, outcome assessment, social service, utilization of health services
Abstract:
Clinical trials have shown that weight reduction through lifestyle change can delay or prevent diabetes and reduce blood pressure. An appropriate definition of obesity using anthropometric measures is useful in predicting diabetes and hypertension at the population level. However, there is debate on which measure of obesity is best or most strongly associated with diabetes and hypertension, and on what the optimal cut-off values for body mass index (BMI) and waist circumference (WC) are in this regard. The aims of the study were 1) to compare the strength of the association of undiagnosed or newly diagnosed diabetes (or hypertension) with anthropometric measures of obesity in people of Asian origin, 2) to detect ethnic differences in the association of undiagnosed diabetes with obesity, 3) to identify ethnic- and sex-specific change point values of BMI and WC for changes in the prevalence of diabetes, and 4) to evaluate the ethnic-specific WC cut-off values for central obesity proposed by the International Diabetes Federation (IDF) in 2005. The study population comprised 28 435 men and 35 198 women, ≥ 25 years of age, from 39 cohorts participating in the DECODA and DECODE studies, including 5 Asian Indian (n = 13 537), 3 Mauritian Indian (n = 4505) and Mauritian Creole (n = 1075), 8 Chinese (n = 10 801), 1 Filipino (n = 3841), 7 Japanese (n = 7934), 1 Mongolian (n = 1991), and 14 European (n = 20 979) studies. The prevalence of diabetes, hypertension and central obesity was estimated using descriptive statistics, and the differences were determined with the χ2 test. The odds ratios (ORs) or coefficients (from the logistic model) and hazard ratios (HRs, from the Cox model applied to interval-censored data) for BMI, WC, waist-to-hip ratio (WHR), and waist-to-stature ratio (WSR) were estimated for diabetes and hypertension. The differences between BMI and WC, WHR or WSR were compared by applying paired homogeneity tests (Wald statistics with 1 df). Hierarchical three-level Bayesian change point analysis, adjusting for age, was applied to identify the most likely cut-off/change point values for BMI and WC in association with previously undiagnosed diabetes. The ORs for diabetes in men (women) for BMI, WC, WHR and WSR were 1.52 (1.59), 1.54 (1.70), 1.53 (1.50) and 1.62 (1.70), respectively, and the corresponding ORs for hypertension were 1.68 (1.55), 1.66 (1.51), 1.45 (1.28) and 1.63 (1.50). For diabetes, the OR for BMI did not differ from that for WC or WHR, but was lower than that for WSR (p = 0.001) in men, while in women the ORs were higher for WC and WSR than for BMI (both p < 0.05). Hypertension was more strongly associated with BMI than with WHR in men (p < 0.001), and more strongly with BMI than with WHR (p < 0.001), WSR (p < 0.01) and WC (p < 0.05) in women. The HRs for the incidence of diabetes and hypertension did not differ between BMI and the other three central obesity measures in Mauritian Indians and Mauritian Creoles during follow-ups of 5, 6 and 11 years. The prevalence of diabetes was highest in Asian Indians, lowest in Europeans and intermediate in the others, given the same BMI or WC category. The coefficients for diabetes in BMI (kg/m2) were (men/women): 0.34/0.28, 0.41/0.43, 0.42/0.61, 0.36/0.59 and 0.33/0.49 for Asian Indians, Chinese, Japanese, Mauritian Indians and Europeans (overall homogeneity test: p > 0.05 in men and p < 0.001 in women). Similar results were obtained for WC (cm). Asian Indian women had lower coefficients than women of other ethnicities.
The change points for BMI were 29.5, 25.6, 24.0, 24.0 and 21.5 kg/m2 in men and 29.4, 25.2, 24.9, 25.3 and 22.5 kg/m2 in women of European, Chinese, Mauritian Indian, Japanese, and Asian Indian descent. The change points for WC were 100, 85, 79 and 82 cm in men and 91, 82, 82 and 76 cm in women of European, Chinese, Mauritian Indian, and Asian Indian descent. The prevalence of central obesity according to the 2005 IDF definition was higher in Japanese men but lower in Japanese women than in their Asian counterparts. The prevalence of central obesity was 52 times higher in Japanese men but 0.8 times lower in Japanese women compared to the National Cholesterol Education Programme definition. The findings suggest that both BMI and WC predicted diabetes and hypertension equally well in all ethnic groups. At the same BMI or WC level, the prevalence of diabetes was highest in Asian Indians, lowest in Europeans and intermediate in the others. Ethnic- and sex-specific change points of BMI and WC should be considered in setting diagnostic criteria for obesity to detect undiagnosed or newly diagnosed diabetes.
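The odds ratios reported above come from logistic models, where the OR for a continuous measure such as BMI is obtained by exponentiating the regression coefficient, commonly scaled per standard deviation so that BMI, WC, WHR and WSR can be compared on the same footing. The sketch below illustrates that conversion only; the coefficient, the per-SD scaling and the SD value are assumptions for illustration, not figures from the study.

```python
import math

def odds_ratio(beta_per_unit, scale=1.0):
    """Odds ratio implied by a logistic-regression coefficient.

    beta_per_unit : coefficient on the log-odds scale per unit of the measure
    scale         : units per increment of interest (e.g. one standard deviation)
    """
    return math.exp(beta_per_unit * scale)

# Illustrative only: a coefficient of 0.10 per kg/m^2 and an assumed SD of 4 kg/m^2
print(odds_ratio(0.10))             # OR per 1 kg/m^2 increase
print(odds_ratio(0.10, scale=4.0))  # OR per SD, comparable across different measures
```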
Abstract:
The aim of this study was to deepen the understanding of eating disorders, body image dissatisfaction and related traits in males by examining the epidemiology and genetic epidemiology of these conditions in representative population-based twin samples. The sample of Study I included adolescent twins from the FinnTwin12 cohorts, born 1983-87 and assessed by questionnaire at ages 14 y (N=2070 boys, N=2062 girls) and 17 y (N=1857 boys, N=1984 girls). The samples of Studies II-V consisted of young adult twins born 1974-79 from the FinnTwin16 cohorts (Study II N=1245 men, Study III N=724 men, Study IV N=2122 men, Study V N=2426 women and N=1962 men), who were assessed by questionnaire at the age of 22-28 y. In addition, 49 men and 526 women were assessed by a diagnostic interview. The overall response rates for both twin cohorts in all studies were 80-90%. In boys, mainly genetic factors (82%, 95% confidence interval [CI] 72-92) explained the covariation of self-esteem between the ages of 14 y and 17 y, whereas in girls, environmental factors (69%, 95% CI 43-93) were the largest contributors. Of young men, 30% experienced high muscle dissatisfaction, while 12% used or had used muscle-building supplements and/or anabolic steroids on a regular basis. Muscle dissatisfaction exhibited a robust association with indicators of mental distress, and a genetic component (42%, 95% CI 23-59) for its liability in this population was found. The variation in muscle-building substance use was primarily explained by environmental factors. The incidence rate of anorexia nervosa in males for the ages of 10-24 y was 15.7 (95% CI 6.6-37.8) per 100 000 person-years, and its lifetime prevalence by young adulthood was 0.24% (95% CI 0.03-0.44). All detected probands with anorexia nervosa had recovered from eating disorders but suffered from substantial psychiatric comorbidity, which also manifested in their co-twins. Additionally, male co-twins of the probands displayed significant dissatisfaction with body musculature, a male-specific feature of body dysmorphic disorder. All probands were from twin pairs discordant for eating disorders. Of the five male probands with anorexia nervosa, only one was from an opposite-sex twin pair. Among women from opposite-sex pairs, the prevalence of DSM-IV or broad anorexia nervosa was not significantly different from that of women from monozygotic pairs or from dizygotic same-sex pairs. The prevalence of DSM-IV or broad bulimia nervosa did not differ between opposite-sex and same-sex female twin individuals either. In both sexes, the overall profile of indicators of eating disorders was rather similar between individuals from opposite-sex and same-sex pairs. In adolescence, the development of self-esteem was regulated differently in boys than in girls; this finding may have far-reaching implications for the etiology of the sex discrepancy in internalizing and externalizing disorders. In young men, muscle dissatisfaction and muscle-building supplement/steroid use were relatively common. Muscle dissatisfaction was associated with marked psychological distress, such as symptoms of depression and disordered eating. Both genetic and environmental factors explained muscle dissatisfaction in the population, but environmental factors appeared to best explain the use of muscle-building substances. In this study, anorexia nervosa in boys and young men from the general population was more common and more transient, and accompanied by more substantial comorbidity, than previously thought.
Co-twins of the probands with anorexia nervosa displayed significant psychopathology, such as male-specific symptoms of body dysmorphic disorder, but none of them had had an eating disorder; taken together, these traits are suggestive of an endophenotype of anorexia nervosa in males. Little evidence was found that the risk for anorexia nervosa, bulimia nervosa, disordered eating or body dissatisfaction was associated with twin zygosity. Thus, it is unlikely that in utero feminization, masculinization or postnatal socialization according to the sex of the co-twin has a major influence on the later development of eating disorders or related traits.
Abstract:
Various reasons, such as ethical issues in maintaining blood resources, growing costs, and strict requirements for safe blood, have increased the pressure for efficient use of resources in blood banking. The competence of blood establishments can be characterized by their ability to predict the volume of blood collection to be able to provide cellular blood components in a timely manner as dictated by hospital demand. The stochastically varying clinical need for platelets (PLTs) sets a specific challenge for balancing supply with requests. Labour has been shown to be a primary cost-driver and should be managed efficiently. International comparisons of blood banking could recognize inefficiencies and allow reallocation of resources. Seventeen blood centres from 10 countries in continental Europe, Great Britain, and Scandinavia participated in this study. The centres were national institutes (5), parts of the local Red Cross organisation (5), or integrated into university hospitals (7). This study focused on the departments of blood component preparation of the centres. The data were obtained retrospectively by computerized questionnaires completed via the Internet for the years 2000-2002. The data were used in four original articles (numbered I through IV) that form the basis of this thesis. Non-parametric data envelopment analysis (DEA, II-IV) was applied to evaluate and compare the relative efficiency of blood component preparation. Several models were created using different input and output combinations. The focus of the comparisons was on technical efficiency (II-III) and labour efficiency (I, IV). An empirical cost model was tested to evaluate cost efficiency (IV). Purchasing power parities (PPP, IV) were used to adjust the costs of the working hours and to make the costs comparable among countries. The total annual number of whole blood (WB) collections varied from 8,880 to 290,352 in the centres (I). Significant variation was also observed in the annual volume of produced red blood cells (RBCs) and PLTs. The annual number of PLTs produced by any method varied from 2,788 to 104,622 units. In 2002, 73% of all PLTs were produced by the buffy coat (BC) method, 23% by aphaeresis and 4% by the platelet-rich plasma (PRP) method. The annual discard rate of PLTs varied from 3.9% to 31%. The mean discard rate (13%) remained in the same range throughout the study period and demonstrated similar levels and variation in 2003-2004 according to a specific follow-up question (14%, range 3.8%-24%). The annual PLT discard rates were, to some extent, associated with production volumes. The mean RBC discard rate was 4.5% (range 0.2%-7.7%). Technical efficiency showed marked variation (median 60%, range 41%-100%) among the centres (II). Compared to the efficient departments, the inefficient departments used excess labour resources, and probably excess production equipment, to produce RBCs and PLTs. Technical efficiency tended to be higher when the (theoretical) proportion of lost WB collections (total RBC+PLT loss) from all collections was low (III). Labour efficiency varied remarkably, from 25% to 100% (median 47%), when working hours were the only input (IV). Using the estimated total costs as the input (cost efficiency) revealed an even greater variation (13%-100%) and an overall lower efficiency level compared to labour only as the input.
In cost efficiency only, the savings potential (observed inefficiency) was more than 50% in 10 departments, whereas labour and cost savings potentials were both more than 50% in six departments. The association between department size and efficiency (scale efficiency) could not be verified statistically in the small sample. In conclusion, international evaluation of the technical efficiency in component preparation departments revealed remarkable variation. A suboptimal combination of manpower and production output levels was the major cause of inefficiency, and the efficiency did not directly relate to production volume. Evaluation of the reasons for discarding components may offer a novel approach to study efficiency. DEA was proven applicable in analyses including various factors as inputs and outputs. This study suggests that analytical models can be developed to serve as indicators of technical efficiency and promote improvements in the management of limited resources. The work also demonstrates the importance of integrating efficiency analysis into international comparisons of blood banking.
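Data envelopment analysis of the kind applied here scores each department against a frontier spanned by the best-performing peers: in the input-oriented, constant-returns (CCR) envelopment form, the technical efficiency of a unit is the smallest factor θ by which its inputs can be scaled while a non-negative combination of the observed units still matches its outputs. The sketch below solves that linear programme with SciPy for a toy data set; it illustrates the general CCR model under stated assumptions and is not a reconstruction of the study's DEA specification (which used several input-output combinations).

```python
# Sketch of input-oriented CCR data envelopment analysis (constant returns to scale).
# Toy inputs/outputs are invented; a real analysis might use working hours, costs,
# and produced RBC/PLT units per component-preparation department.
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, unit):
    """Technical efficiency of one unit.

    X : (n_units, n_inputs) input matrix, Y : (n_units, n_outputs) output matrix.
    Decision variables are [theta, lambda_1, ..., lambda_n].
    """
    n_units = X.shape[0]
    c = np.zeros(n_units + 1)
    c[0] = 1.0                                   # minimise theta
    A_ub, b_ub = [], []
    for i in range(X.shape[1]):                  # inputs: sum(lambda*x) <= theta*x0
        A_ub.append(np.concatenate(([-X[unit, i]], X[:, i])))
        b_ub.append(0.0)
    for r in range(Y.shape[1]):                  # outputs: sum(lambda*y) >= y0
        A_ub.append(np.concatenate(([0.0], -Y[:, r])))
        b_ub.append(-Y[unit, r])
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, None)] * (n_units + 1), method="highs")
    return res.x[0]

# Toy example: 4 departments, input = working hours, outputs = (RBC, PLT) units
X = np.array([[1000.0], [1500.0], [800.0], [1200.0]])
Y = np.array([[9000.0, 1200.0], [9500.0, 1300.0], [7000.0, 900.0], [6000.0, 700.0]])
for k in range(4):
    print(f"department {k}: efficiency = {ccr_efficiency(X, Y, k):.2f}")
```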
Abstract:
The aims of this dissertation were 1) to investigate associations of the weight status of adolescents with leisure activities and with computer and cell phone use, and 2) to investigate environmental and genetic influences on body mass index (BMI) during adolescence. Finnish twins born in 1983-1987 responded to postal questionnaires at the ages of 11-12 (5184 participants), 14 (4643 participants), and 17 years (4168 participants). Information was obtained on weight and height, leisure activities including television viewing, video viewing, computer games, listening to music, board games, musical instrument playing, reading, arts, crafts, socializing, clubs, sports, and outdoor activities, as well as computer and cell phone use. Activity patterns were studied using latent class analysis. The relationship between leisure activities and weight status was investigated using logistic and linear regression. Genetic and environmental effects on BMI were studied using twin modeling. Of individual leisure activities, sports were associated with a decreased overweight risk among boys in both cross-sectional and longitudinal analyses, but among girls only cross-sectionally. Many sedentary leisure activities, such as video viewing (boys/girls), arts (boys), listening to music (boys), crafts (girls), and board games (girls), had positive associations with being overweight. Computer use was associated with a higher prevalence of overweight in cross-sectional analyses. However, musical instrument playing, commonly considered a sedentary activity, was associated with a decreased overweight risk among boys. Four patterns of leisure activities were found: 'Active and sociable', 'Active but less sociable', 'Passive but sociable', and 'Passive and solitary'. The prevalence of overweight was generally highest among the 'Passive and solitary' adolescents. Overall, leisure activity patterns did not predict overweight risk later in adolescence. An exception was the 14-year-old 'Passive and solitary' girls, who had the greatest risk of becoming overweight by 17 years of age. The heritability of BMI was high (0.58-0.83). Common environmental factors shared by family members affected BMI at 11-12 and 14 years, but their effect had disappeared by 17 years of age. Additive genetic factors explained 90-96% of the stability of BMI across adolescence. Genetic correlations across adolescence were high, which suggests similar genetic effects on BMI throughout adolescence, while unique environmental effects on BMI appeared to vary. These findings suggest that family-based interventions hold promise for obesity prevention into early and middle adolescence, but that later in adolescence obesity prevention should focus on individuals. A useful target could be adolescents' leisure time, and our findings highlight the importance of versatility in leisure activities.
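Twin modelling of the kind used here decomposes BMI variance into additive genetic (A), common environmental (C) and unique environmental (E) components. In its simplest form, rough estimates follow from the monozygotic and dizygotic twin-pair correlations via Falconer's formulas; the study itself fitted formal structural-equation twin models, so the sketch below is only a back-of-the-envelope approximation with invented correlations.

```python
# Back-of-the-envelope ACE decomposition from twin correlations (Falconer's formulas).
# Correlations below are invented; formal twin modelling estimates A, C and E by
# maximum likelihood rather than with these moment estimators.

def ace_from_twin_correlations(r_mz, r_dz):
    """Rough A/C/E variance shares from MZ and DZ intraclass correlations."""
    a2 = 2 * (r_mz - r_dz)      # additive genetic share (heritability)
    c2 = 2 * r_dz - r_mz        # common (shared) environmental share
    e2 = 1 - r_mz               # unique environment (plus measurement error)
    return a2, c2, e2

a2, c2, e2 = ace_from_twin_correlations(r_mz=0.80, r_dz=0.45)
print(f"A = {a2:.2f}, C = {c2:.2f}, E = {e2:.2f}")   # 0.70, 0.10, 0.20
```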