Abstract:
Most countries of Europe, as well as many countries in other parts of the world, are experiencing an increased impact of natural hazards. It is often speculated, but not yet proven, that climate change might influence the frequency and magnitude of certain hydro-meteorological natural hazards. What has certainly been observed is a sharp increase in financial losses caused by natural hazards worldwide. Even though Europe appears not to be affected by natural hazards to such catastrophic extents as other parts of the world, the damages experienced here are certainly increasing too. Natural hazards, climate change and, in particular, risks have therefore recently been placed high on the political agenda of the EU. In the search for appropriate instruments for mitigating the impacts of natural hazards and climate change, as well as risks, the integration of these factors into spatial planning practices is receiving increasing attention. The focus of most approaches lies on single hazards and climate change mitigation strategies. The current paradigm shift from climate change mitigation to adaptation is used as a basis to draw conclusions and recommendations on what concepts could be further incorporated into spatial planning practices. In particular, multi-hazard approaches are discussed as an important approach that should be developed further. One focal point is the definition and applicability of the terms natural hazard, vulnerability and risk in spatial planning practices. Vulnerability and risk concepts especially are so manifold and complicated that their application in spatial planning has to be analysed most carefully. The PhD thesis is based on six published articles that describe the results of European research projects which have elaborated strategies and tools for integrated communication and assessment practices on natural hazards and climate change impacts. The papers describe approaches at the local, regional and European levels, from both theoretical and practical perspectives. Based on these, past, current and potential future spatial planning applications are reviewed and discussed. In conclusion, it is recommended to shift from single-hazard assessments to multi-hazard approaches that integrate potential climate change impacts. Vulnerability concepts should play a stronger role than at present, and adaptation to natural hazards and climate change should be emphasized more in relation to mitigation. It is outlined that the integration of risk concepts in planning is rather complicated and would need very careful assessment to ensure applicability. Future spatial planning practices should also become more interdisciplinary, i.e. integrate as many stakeholders and experts as possible to ensure the sustainability of investments.
Abstract:
Of water or the Spirit? Uuras Saarnivaara's theology of baptism. The aim of the study was to investigate PhD and ThD Uuras Saarnivaara's views on baptism, as well as their possible changes and the reasons for them. Dr Saarnivaara said himself that he searched for the truth about the relationship between baptism and faith for decades, and had faltered in his views. The method of this research is systematic analysis. A close study of the source material shows that Dr Saarnivaara's views on baptism most likely changed several times. Therefore, special attention was paid to the time periods defined by when his literary works were published. This revealed the different perspectives he had on baptism. The fact that Dr Saarnivaara worked on two continents, Europe and North America, added a challenge to the research process. At the beginning of the research, I described Dr Saarnivaara's phases of life, mapped out his vast literary production, and presented his theological basis. Saarnivaara's theological view on the means of grace and their interrelation in the church was influenced by the Laestadian movement, which caused him to adopt the view that the Holy Spirit does not dwell in the means of grace, but in the believers. Thus the real presence of Christ in the means of grace is denied. God's word is divided into Biblical revelation and proclamation by believers through the means of grace. Also, the sacraments are overshadowed by the preached word. Because grace is received through the word of the gospel preached publicly or privately by a believer, the preacher's status gains importance at the expense of the actual means of grace. Saarnivaara was intrigued by the content of baptism from the time he was a student until the end of his life. As a young theologian, he adopted the opinions of his teachers as well as the view of the Evangelical Lutheran Church of Finland, which at the time was dominated by the pietistic movement and the teachings of J. T. Beck. After Saarnivaara had converted to the Laestadian movement, moved to the United States and started his Luther research, he adopted a view on baptism which was to a great extent in accordance with Luther and the Lutheran Symbolical Books. Saarnivaara considered his former views on baptism unbiblical and publicly apologised for them. In the 1950s, after starting his ministry within the Finnish neopietistic movements, Saarnivaara adopted a Laestadian-neopietistic doctrine of baptism. During his Beckian-pietistic era, Saarnivaara based his baptism theology on the event of the disciples of Jesus being baptised by John the Baptist, the revival of Samaria in the Book of Acts, and the conversion of Cornelius and his family, all cases where the receiving of the Holy Spirit and the baptism were two separate events in time. In order to defend the theological unity of the Bible, Saarnivaara had to interpret Jesus' teachings on baptism in the Gospels and the teachings of the Apostles in the New Testament letters from a viewpoint based on the three events mentioned above. During his Beckian-pietistic era, this basic hermeneutic choice caused Saarnivaara to separate baptism by water and baptism by the Holy Spirit in his salvation theology. Simultaneously, the faith of a small child is denied, and rebirth is divided into two parts, the objective and the subjective, the latter being moved from the moment of baptism to a possible spiritual breakthrough at an age when the person possesses a more mature understanding.
During his Laestadian-Lutheran era, Saarnivaara's theology of baptism was biblically consistent and the same for all people regardless of the person's age. Small children receive faith in baptism through the presence of Christ. The task of other people's faith is limited to the act of bringing the child to the baptism so that the child may receive his/her own faith from Christ and be born again as a child of God. The doctrine of baptism during Saarnivaara's Laestadian-neopietistic era represents in many aspects the emphases he presented during his first era, although they were now partly more radical. Baptism offers grace; it is not a means of grace. Justification, rebirth and salvation take place later, once a person has reached an age of more mature understanding, through the word of God. A small child cannot be born again in baptism because being born again requires personal faith, which is received through hearing and understanding the law and the gospel. Saarnivaara's views on baptism during his first and third eras are, unlike during his second era, quite controversial. The question of the salvation of a small child goes unanswered, or salvation is even denied. The central question during both eras is the demand for conversion and personal faith at a mature age. The background for this demand is Saarnivaara's anthropology, which accentuates man's relationship to God as an intellectual and mental matter requiring understanding, and which needs no material instruments. The first two theological eras regarding Saarnivaara's doctrine of baptism lasted around ten years. The third era lasted over 40 years, until his death.
Abstract:
The availability of oxygen has a major effect on all organisms. The yeast Saccharomyces cerevisiae is able to adapt its metabolism for growth under different conditions of oxygen provision, and to grow even in the complete absence of oxygen. Although the physiology of S. cerevisiae has mainly been studied under fully aerobic and anaerobic conditions, less is known of metabolism under oxygen-limited conditions and of the adaptation to changing conditions of oxygen provision. This study compared the physiology of S. cerevisiae at five levels of oxygen provision (0, 0.5, 1.0, 2.8 and 20.9% O2 in the feed gas) by using measurements at the metabolite, transcriptome and proteome levels. On the transcriptional level, the main differences were observed between three groups of oxygen levels, 0, 0.5-2.8 and 20.9% O2, which led to fully fermentative, respiro-fermentative and fully respiratory modes of metabolism, respectively. However, proteome analysis suggested post-transcriptional regulation at the 0.5% O2 level. The analysis of metabolite and transcript levels of central carbon metabolism also suggested post-transcriptional regulation, especially in glycolysis. Further, a global upregulation of genes related to respiratory pathways was observed in the oxygen-limited conditions, and the same trend was seen in the proteome analysis and in the activities of enzymes of the TCA cycle. The responses of intracellular metabolites related to central carbon metabolism, and the transcriptional responses, to a change in oxygen availability were studied. As a response to sudden oxygen depletion, concentrations of the metabolites of central carbon metabolism responded faster than the corresponding levels of gene expression. In general, the genome-wide transcriptional responses to oxygen depletion were highly similar when two different initial conditions of oxygen provision (20.9 and 1.0% O2) were compared. Genes related to growth and cell proliferation were transiently downregulated, whereas genes related to protein degradation and phosphate uptake were transiently upregulated. In the cultures initially receiving 1.0% O2, a transient upregulation of genes related to fatty acid oxidation, peroxisomal biogenesis, response to oxidative stress and the pentose phosphate pathway was observed. Additionally, this work analysed the effect of oxygen on the transcription of genes belonging to the hexose transporter gene family. Although the specific glucose uptake rate was highest in fully anaerobic conditions, none of the HXT genes showed their highest expression in anaerobic conditions. However, the expression of genes encoding the moderately low-affinity transporters decreased with decreasing oxygen level. Thus it was concluded that there is a relative increase in high-affinity transport in anaerobic conditions, supporting the high uptake rate.
Abstract:
Interactions among individuals give rise to both cooperation and conflict. Individuals will behave selfishly or altruistically depending on which gives the higher payoff. The reproductive strategies of many animals are flexible, and several alternative tactics may be present from which the most suitable one is applied. Generally, alternative reproductive tactics may be defined as a response to competition from individuals of the same sex. These alternative reproductive tactics are means by which individuals may fine-tune their fitness to prevailing circumstances; they are shaped both by the environment individuals occupy and by the behaviour of other individuals sharing that environment. By employing such alternative ways of achieving reproductive output, individuals may alleviate competition from others. Conspecific brood parasitism (CBP) is an alternative reproductive strategy found in several egg-laying animal groups, and it is especially common among waterfowl. Within this alternative reproductive strategy, four reproductive options can be identified. These four options represent a continuum from low reproductive effort coupled with low fitness returns to high reproductive effort and consequently high benefits. It may not be evident, however, how individuals should allocate reproductive effort between eggs laid in their own nest and eggs laid in the nests of others. Limited fecundity will constrain the number of eggs donated by a parasite, but the tendency of hosts to accept parasitic eggs may also affect the allocation decision. Furthermore, kinship, individual quality and the costs of breeding may complicate the allocation decision. In this thesis, I view the seemingly paradoxical effects of kinship on conflict resolution in the context of alternative reproductive tactics, examining the resulting features of cooperation and conflict. Conspecific brood parasitism sets the stage for investigating these questions. Using both empirical and theoretical approaches, I examine the nature of CBP in a brood-parasitic duck, the Barrow's goldeneye (Bucephala islandica). The theoretical chapter of this thesis gives rise to four main conclusions. Firstly, variation in individual quality plays a central role in shaping breeding strategies. Secondly, kinship plays a central role in the evolution of CBP. Thirdly, egg recognition ability may affect the prevalence of parasitism: if egg recognition is perfect, higher relatedness between host and parasite facilitates CBP. Finally, I show that the relative costs of egg laying and post-laying care play a so far underestimated role in determining the prevalence of parasitism. The costs of breeding may outweigh the possible inclusive fitness benefits accrued from receiving eggs from relatives. Several of the patterns brought out by the theoretical work are then confirmed empirically in the following chapters. The findings include confirmation of the central role of relatedness in determining the extent of parasitism, as well as of relatedness inducing a counterintuitive host clutch reduction. Furthermore, I demonstrate a cost of CBP inflicted on hosts, as well as results suggesting that host age reflects individual quality, affecting the ability to overcome costs inflicted by CBP. In summary, I demonstrate both theoretically and empirically the presence of cooperation and conflict in the interactions between conspecific parasites and their hosts.
The field of CBP research has traditionally been divided, but the first steps have now been taken toward bridging that divide. The theoretical findings of chapter 1 in particular make it possible to view the seemingly contrasting results of various studies within the same framework, and may direct future research toward more general features underlying differences in the patterns of CBP between populations or species.
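As a hedged illustration of the cost-benefit reasoning above, and not the thesis's actual model, Hamilton's rule formalizes when tolerating a relative's parasitic egg pays off for the host: the relatedness-weighted benefit to the parasite must exceed the host's own cost of extra care. All numbers below are invented.

```python
def accepting_pays(relatedness: float, benefit_to_parasite: float,
                   cost_to_host: float) -> bool:
    """Hamilton's rule: altruism is favoured when r * b > c."""
    return relatedness * benefit_to_parasite > cost_to_host

# A half-sib-level relative (r = 0.25) donating one egg's worth of fitness:
print(accepting_pays(0.25, 1.0, 0.2))  # cheap post-laying care -> True
print(accepting_pays(0.25, 1.0, 0.4))  # costly care outweighs kin benefit -> False
```

This is exactly the trade-off stated above: the costs of breeding can outweigh the inclusive fitness benefit of receiving eggs from relatives.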
Abstract:
With transplant rejection rendered a minor concern and survival rates after liver transplantation (LT) steadily improving, long-term complications are attracting more attention. Current immunosuppressive therapies, together with other factors, are accompanied by considerable long-term toxicity, which clinically manifests as renal dysfunction, high risk for cardiovascular disease, and cancer. This thesis investigates the incidence, causes, and risk factors for such renal dysfunction, cardiovascular risk, and cancer after LT. Long-term effects of LT are further addressed by surveying the quality of life and employment status of LT recipients. The consecutive patients included had undergone LT at Helsinki University Hospital from 1982 onwards. Data regarding renal function – creatinine and estimated glomerular filtration rate (GFR) – were recorded before and repeatedly after LT in 396 patients. The presence of hypertension, dyslipidemia, diabetes, impaired fasting glucose, and overweight/obesity before and 5 years after LT was determined among 77 patients transplanted for acute liver failure. The entire cohort of LT patients (540 patients), including both children and adults, was linked with the Finnish Cancer Registry, and the numbers of cancers observed were compared with site-specific expected numbers based on national cancer incidence rates stratified by age, gender, and calendar time. Health-related quality of life (HRQoL), measured by the 15D instrument, and employment status were surveyed among all adult patients alive in 2007 (401 patients). The response rate was 89%. Posttransplant cardiovascular risk factor prevalence and HRQoL were compared with those in the age- and gender-matched Finnish general population. The cumulative risk for chronic kidney disease increased from 10% at 5 years to 16% at 10 years following LT. GFR up to 10 years after LT could be predicted by the GFR at 1 year. In patients transplanted for chronic liver disease, a moderate correlation of pretransplant GFR with later GFR was also evident, whereas in acute liver failure patients even severe pretransplant renal dysfunction often recovered after LT. By 5 years after LT, 71% of acute liver failure patients were receiving antihypertensive medications, 61% were exhibiting dyslipidemia, 10% were diabetic, 32% were overweight, and 13% were obese. Compared with the general population, only hypertension displayed a significantly elevated prevalence among patients – 2.7-fold – whereas patients exhibited 30% less dyslipidemia and 71% less impaired fasting glucose. The cumulative incidence of cancer was 5% at 5 years and 13% at 10 years. Compared with the general population, patients were subject to a 2.6-fold cancer risk, with non-melanoma skin cancer (standardized incidence ratio, SIR, 38.5) and non-Hodgkin lymphoma (SIR 13.9) being the predominant malignancies. Non-Hodgkin lymphoma was associated with male gender, young age, and the immediate posttransplant period, whereas old age and antibody induction therapy raised skin-cancer risk. HRQoL deviated from the values of the general population only to a clinically unimportant degree, but significant deficits among patients were evident in some physical domains. HRQoL did not seem to decrease with longer follow-up. Although 87% of patients reported improved working capacity, data on return to working life showed marked age-dependency: among patients aged less than 40 at LT, 70 to 80% returned to work; among those aged 40 to 50, 55%; and among those above 50, 15% to 28%.
The most common cause of unemployment was early retirement before LT. Employed patients exhibited better HRQoL than unemployed ones. In conclusion, although renal impairment, hypertension, and cancer are evidently common after LT and increase with time, patients’ quality of life remains comparable with that of the general population.
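A minimal sketch of how a standardized incidence ratio of the kind quoted above (e.g. SIR 38.5 for non-melanoma skin cancer) is conventionally computed: observed cases divided by the expected number derived from national rates stratified by age, gender, and calendar time. The strata, rates, and counts below are invented for illustration, not taken from the study.

```python
def expected_cases(person_years, national_rate):
    """Expected count: sum over strata of person-years x national incidence rate."""
    return sum(person_years[s] * national_rate[s] for s in person_years)

# Illustrative strata keyed by (age band, gender); rates are cases per person-year.
person_years  = {("40-49", "M"): 800.0, ("50-59", "M"): 600.0}
national_rate = {("40-49", "M"): 0.001, ("50-59", "M"): 0.003}

observed = 7  # hypothetical count of cancers observed in the cohort
sir = observed / expected_cases(person_years, national_rate)
print(f"SIR = {sir:.1f}")  # 7 / 2.6 -> 2.7, i.e. 2.7 times the expected incidence
```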
Abstract:
Vasomotor hot flushes are reported by approximately 75% of postmenopausal women, but their frequency and severity show great individual variation. Hot flushes have been present in women attending observational studies showing cardiovascular benefit associated with hormone therapy use, whereas they have been absent or very mild in randomized hormone therapy trials showing cardiovascular harm. Therefore, if hot flushes are a factor connected with vascular health, they could perhaps be one explanation for the divergence of cardiovascular data in observational versus randomized studies. For the present study, 150 healthy, recently postmenopausal women showing a large variation in hot flushes were studied in regard to cardiovascular health by way of pulse wave analysis, ambulatory blood pressure, and several biochemical vascular markers. In addition, the possible impact of hot flushes on outcomes of hormone therapy was studied. This study shows that women with severe hot flushes exhibit a greater vasodilatory reactivity, as assessed by pulse wave analysis, than do women without vasomotor symptoms. This can be seen as a hot flush-related vascular benefit. Although severe night-time hot flushes seem to be accompanied by transient increases in blood pressure and heart rate, the diurnal blood pressure and heart rate profiles show no significant differences between women without hot flushes and those with mild, moderate or severe hot flushes. The levels of vascular markers, such as lipids, lipoproteins, C-reactive protein and sex hormone-binding globulin, show no association with hot flush status. In the 6-month hormone therapy trial the women were classified as having either tolerable or intolerable hot flushes. These groups were treated in a randomized order with transdermal estradiol gel, oral estradiol alone or in combination with medroxyprogesterone acetate, or with placebo. In women with only tolerable hot flushes, oral estradiol leads to a reduced vasodilatory response and to increases in 24-hour and daytime blood pressures as compared with women with intolerable hot flushes receiving the same therapy. No such effects were observed with the other treatment regimens or in women with intolerable hot flushes. The responses of vascular biomarkers to hormone therapy are unaffected by hot flush status. In conclusion, hot flush status contributes to cardiovascular health before and during hormone therapy. Severe hot flushes are associated with an increased vasodilatory, and thus beneficial, vascular status. Oral estradiol leads to vasoconstrictive changes and increases in blood pressure, and thus to possible vascular harm, but only in women whose hot flushes are so mild that they would probably not lead to the initiation of hormone therapy in clinical practice. Healthy, recently postmenopausal women with moderate to severe hot flushes should be given the opportunity to use hormone therapy to alleviate hot flushes, and if estrogen is prescribed for indications other than the control of hot flushes, the transdermal route of administration should be favored.
Abstract:
This study is one part of a collaborative depression research project, the Vantaa Depression Study (VDS), involving the Department of Mental and Alcohol Research of the National Public Health Institute, Helsinki, and the Department of Psychiatry of the Peijas Medical Care District (PMCD), Vantaa, Finland. The VDS includes two parts: a record-based study consisting of 803 patients, and a prospective, naturalistic cohort study of 269 patients. Both studies include secondary-level care psychiatric out- and inpatients with a new episode of major depressive disorder (MDD). Data for the record-based part of the study came from a computerised patient database incorporating all outpatient visits as well as treatment periods at the inpatient unit. We included all patients aged 20 to 59 years who had been assigned a clinical diagnosis of depressive episode or recurrent depressive disorder according to the International Classification of Diseases, 10th edition (ICD-10) criteria and who had at least one outpatient visit or day as an inpatient in the PMCD during the study period January 1, 1996, to December 31, 1996. All those with an earlier diagnosis of schizophrenia, other non-affective psychosis, or bipolar disorder were excluded. Patients treated in the somatic departments of Peijas Hospital and those who had consulted but not received treatment from the psychiatric consultation services were also excluded. The study sample comprised 290 male and 513 female patients. All their psychiatric records were reviewed, and a structured form with 57 items was completed for each patient. The treatment provided was reviewed up to the end of the depression episode or to the end of 1997. Most (84%) of the patients received antidepressants, including a minority (11%) treated with clearly subtherapeutic low doses. During the treatment period the depressed patients investigated averaged only a few visits to psychiatrists (median two visits), but more to other health professionals (median seven). One-fifth of both genders were inpatients, with a mean of nearly two inpatient treatment periods during the overall treatment period investigated. The median length of a hospital stay was 2 weeks. Use of antidepressants was quite conservative: the first antidepressant had been switched to another compound in only about one-fifth (22%) of patients, and only two patients had received up to five antidepressant trials. Only 7% of those prescribed any antidepressant received two antidepressants simultaneously. None of the patients was prescribed any other augmentation medication. Refusal of antidepressant treatment was the most common explanation for receiving no antidepressants. During the treatment period, 19% of those not already receiving a disability pension were granted one due to psychiatric illness. These patients were nearly nine years older than those not pensioned. They were also more severely ill, made significantly more visits to professionals, and received significantly more concomitant medications (hypnotics, anxiolytics, and neuroleptics) than did those receiving no pension. In the prospective part of the VDS, 806 adult patients (aged 20-59 years) in the PMCD were screened for a possible new episode of DSM-IV MDD. Of these, 542 patients were interviewed face-to-face with the WHO Schedules for Clinical Assessment in Neuropsychiatry (SCAN), Version 2.0. Exclusion criteria were the same as in the record-based part of the VDS. Of these 542 patients, 269 fulfilled the criteria of a DSM-IV major depressive episode (MDE).
This study investigated factors associated with patients' functional disability, social adjustment, and work disability (being on sick-leave or being granted a disability pension). At the beginning of treatment, the most important single factor associated with overall social and functional disability was severity of depression, but older age and personality disorders also contributed significantly. Total duration and severity of depression, phobic disorders, alcoholism, and personality disorders all independently contributed to poor social adjustment. Of those who were employed, almost half (43%) were on sick-leave. Besides severity and number of episodes of depression, female gender and age over 50 years strongly and independently predicted being on sick-leave. Factors influencing social and occupational disability and social adjustment among patients with MDD were studied prospectively during an 18-month follow-up period. Patients' functional disability and social adjustment improved during the follow-up concurrently with recovery from depression. The current level of functioning and social adjustment of a patient with depression was predicted by severity of depression, recurrence before baseline and during follow-up, lack of full remission, and time spent depressed. Comorbid psychiatric disorders, personality traits (neuroticism), and perceived social support also had a significant influence. During the 18-month follow-up period, 13 of the 269 patients (5%) switched to bipolar disorder, and 58 (20%) dropped out. Of the remaining 198 patients, 186 (94%) were not pensioned at baseline, and they were investigated further. Of them, 21 were granted a disability pension during the follow-up. Those who received a pension were significantly older, less often had vocational education, and were more often on sick-leave than those not pensioned, but did not differ with regard to any other sociodemographic or clinical factors. Patients with MDD mostly received adequate antidepressant treatment, but problems existed in treatment intensity and monitoring. It is challenging to find those at greatest risk for disability and to provide them adequate and efficacious treatment. This also poses a great challenge to society as a whole to provide sufficient resources.
Abstract:
Background: The fecal neutrophil-derived proteins calprotectin and lactoferrin have proven useful surrogate markers of intestinal inflammation. The aim of this study was to compare fecal calprotectin and lactoferrin concentrations with clinically, endoscopically, and histologically assessed Crohn’s disease (CD) activity, and to explore the suitability of these proteins as surrogate markers of mucosal healing during anti-TNFα therapy. Furthermore, we studied changes in the number and expression of effector and regulatory T cells in bowel biopsy specimens during anti-TNFα therapy. Patients and methods: Adult CD patients referred for ileocolonoscopy for various reasons were recruited (106 ileocolonoscopies in 77 patients; Study I). Clinical disease activity was assessed with the Crohn’s disease activity index (CDAI) and endoscopic activity with both the Crohn’s disease endoscopic index of severity (CDEIS) and the simple endoscopic score for Crohn’s disease (SES-CD). Stool samples for measurements of calprotectin and lactoferrin, and blood samples for CRP, were collected. For Study II, biopsy specimens were obtained from the ileum and the colon for histologic activity scoring. In prospective Study III, after baseline ileocolonoscopy, 15 patients received induction with anti-TNFα blocking agents, and endoscopic, histologic, and fecal-marker responses to therapy were evaluated at 12 weeks. For detecting changes in the number and expression of effector and regulatory T cells, biopsy specimens were taken from the most severely diseased lesions in the ileum and the colon (Study IV). Results: Endoscopic scores correlated significantly with fecal calprotectin and lactoferrin (p<0.001). Both fecal markers were significantly lower in patients with endoscopically inactive than with active disease (p<0.001). In detecting endoscopically active disease, the sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) for calprotectin ≥200 μg/g were 70%, 92%, 94%, and 61%; for lactoferrin ≥10 μg/g they were 66%, 92%, 94%, and 59%. Correspondingly, the sensitivity, specificity, PPV, and NPV for CRP >5 mg/l were 48%, 91%, 91%, and 48%. Fecal markers were significantly higher in active colonic (both p<0.001) or ileocolonic (calprotectin p=0.028, lactoferrin p=0.004) than in ileal disease. In ileocolonic or colonic disease, the colon histology score correlated significantly with fecal calprotectin (r=0.563) and lactoferrin (r=0.543). In patients receiving anti-TNFα therapy, median fecal calprotectin decreased from 1173 μg/g (range 88-15326) to 130 μg/g (13-1419) and lactoferrin from 105.0 μg/g (4.2-1258.9) to 2.7 μg/g (0.0-228.5), both p=0.001. The ratio of ileal IL-17+ cells to CD4+ cells decreased significantly during anti-TNF treatment (p=0.047). The ratio of IL-17+ cells to Foxp3+ cells was higher in the patients’ baseline specimens than in their post-treatment specimens (p=0.038). Conclusions: For evaluation of CD activity based on endoscopic findings, fecal calprotectin and lactoferrin were more sensitive surrogate markers than CDAI and CRP. Fecal calprotectin and lactoferrin were significantly higher in endoscopically active disease than in endoscopic remission. In both ileocolonic and colonic disease, fecal markers correlated closely with histologic disease activity. In CD, these neutrophil-derived proteins thus seem to be useful surrogate markers of endoscopic activity. During anti-TNFα therapy, fecal calprotectin and lactoferrin decreased significantly.
The anti-TNFα treatment was also reflected in a decreased IL-17/Foxp3 cell ratio, which may indicate improved balance between effector and regulatory T cells with treatment.
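A minimal sketch of the standard 2x2-table definitions behind the diagnostic figures quoted above. The counts are invented, but chosen so that the calprotectin cutoff (≥200 μg/g) reproduces the reported 70%/92%/94%/61% values.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from true/false positives/negatives."""
    sensitivity = tp / (tp + fn)  # share of active-disease patients flagged
    specificity = tn / (tn + fp)  # share of inactive-disease patients cleared
    ppv = tp / (tp + fp)          # P(active disease | positive test)
    npv = tn / (tn + fn)          # P(inactive disease | negative test)
    return sensitivity, specificity, ppv, npv

sens, spec, ppv, npv = diagnostic_metrics(tp=49, fp=3, fn=21, tn=33)
print(f"sens={sens:.0%} spec={spec:.0%} PPV={ppv:.0%} NPV={npv:.0%}")
# -> sens=70% spec=92% PPV=94% NPV=61%
```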
Abstract:
Diffuse large B-cell lymphoma (DLBCL) is the most common of the non-Hodgkin lymphomas. As DLBCL is characterized by heterogeneous clinical and biological features, its prognosis varies. To date, the International Prognostic Index has been the strongest predictor of outcome for DLBCL patients. However, it takes no biological characteristics of the disease into account. Gene expression profiling studies have identified two major cell-of-origin phenotypes in DLBCL with different prognoses: the favourable germinal centre B-cell-like (GCB) and the unfavourable activated B-cell-like (ABC) phenotypes. However, results on the prognostic impact of the immunohistochemically defined GCB versus non-GCB distinction are controversial. Furthermore, since the addition of the CD20 antibody rituximab to chemotherapy has been established as the standard treatment of DLBCL, all molecular markers need to be evaluated in the post-rituximab era. In this study, we aimed to evaluate the predictive value of immunohistochemically defined cell-of-origin classification in DLBCL patients. The GCB and non-GCB phenotypes were defined according to the Hans algorithm (CD10, BCL6 and MUM1/IRF4) among 90 immunochemotherapy- and 104 chemotherapy-treated DLBCL patients. In the chemotherapy group, we observed a significant difference in survival between GCB and non-GCB patients, with a good and a poor prognosis, respectively. In the rituximab group, however, no prognostic value of the GCB phenotype was observed. Likewise, among 29 high-risk de novo DLBCL patients receiving high-dose chemotherapy and autologous stem cell transplantation, the survival of non-GCB patients was improved, and no difference in outcome was seen between the GCB and non-GCB subgroups. Since the results suggested that the Hans algorithm was not applicable in immunochemotherapy-treated DLBCL patients, we focused further on algorithms based on ABC markers. We examined a modified activated B-cell-like algorithm based on MUM1/IRF4 and FOXP1, as well as the previously reported Muris algorithm (BCL2, CD10 and MUM1/IRF4), among 88 DLBCL patients uniformly treated with immunochemotherapy. Both algorithms distinguished the unfavourable ABC-like subgroup, with a significantly inferior failure-free survival relative to the GCB-like DLBCL patients. Similarly, the results for the individual predictive molecular markers, the transcription factor FOXP1 and the anti-apoptotic protein BCL2, have been inconsistent and should be assessed in immunochemotherapy-treated DLBCL patients. These markers were evaluated in a cohort of 117 patients treated with rituximab and chemotherapy. FOXP1 expression could not distinguish patients with favourable outcomes from those with poor outcomes. In contrast, BCL2-negative DLBCL patients had significantly superior survival relative to BCL2-positive patients. Our results indicate that the immunohistochemically defined cell-of-origin classification in DLBCL has prognostic impact in the immunochemotherapy era when the identifying algorithms are based on ABC-associated markers. We also propose that BCL2 negativity is predictive of a favourable outcome. Further investigational efforts are, however, warranted to identify the molecular features of DLBCL that could enable individualized cancer therapy in routine patient care.
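For readers unfamiliar with the Hans algorithm referenced above, the published decision tree (Hans et al., 2004) reduces to three immunostains, each conventionally scored positive at a 30% cutoff; a sketch of that tree, not of this study's code:

```python
def hans_phenotype(cd10: bool, bcl6: bool, mum1: bool) -> str:
    """Hans decision tree: cell-of-origin class from CD10, BCL6 and MUM1/IRF4."""
    if cd10:
        return "GCB"      # CD10+ -> germinal centre B-cell-like
    if not bcl6:
        return "non-GCB"  # CD10- BCL6- -> non-GCB
    return "GCB" if not mum1 else "non-GCB"  # MUM1 decides among CD10- BCL6+

print(hans_phenotype(cd10=False, bcl6=True, mum1=True))  # non-GCB (ABC-like)
```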
Abstract:
The purpose of this study was to estimate the prevalence and distribution of reduced visual acuity, major chronic eye diseases, and the subsequent need for eye care services in the Finnish adult population comprising persons aged 30 years and older. In addition, we analyzed the effect of decreased vision on functioning and need for assistance using the World Health Organization’s (WHO) International Classification of Functioning, Disability, and Health (ICF) as a framework. The study was based on the Health 2000 health examination survey, a nationally representative population-based comprehensive survey of health and functional capacity carried out in 2000 to 2001 in Finland. The study sample representing the Finnish population aged 30 years and older was drawn by two-stage stratified cluster sampling. The Health 2000 survey included a home interview and a comprehensive health examination conducted at a nearby screening center. If the invited participants did not attend, an abridged examination was conducted at home or in an institution. Based on our findings, the great majority (96%) of Finnish adults had at least moderate visual acuity (VA ≥ 0.5) with current refraction correction, if any. However, in the age group 75–84 years the prevalence decreased to 81%, and after 85 years to 46%. In the population aged 30 years and older, the prevalence of habitual visual impairment (VA ≤ 0.25) was 1.6%, and 0.5% were blind (VA < 0.1). The prevalence of visual impairment increased significantly with age (p < 0.001), and after the age of 65 years the increase was sharp. Visual impairment was equally common in both sexes (OR 1.20, 95% CI 0.82–1.74). Based on self-reported and/or register-based data, the estimated total prevalences of cataract, glaucoma, age-related maculopathy (ARM), and diabetic retinopathy (DR) in the study population were 10%, 5%, 4%, and 1%, respectively. The prevalence of all of these chronic eye diseases increased with age (p < 0.001). Cataract and glaucoma were more common in women than in men (OR 1.55, 95% CI 1.26–1.91 and OR 1.57, 95% CI 1.24–1.98, respectively). The most prevalent eye diseases in people with visual impairment (VA ≤ 0.25) were ARM (37%), unoperated cataract (27%), glaucoma (22%), and DR (7%). More than half (58%) of visually impaired people had had a vision examination during the past five years, and 79% had received some vision rehabilitation services, mainly in the form of spectacles (70%). Only one-third (31%) had received formal low vision rehabilitation (i.e., fitting of low vision aids, patient education, training for orientation and mobility, training for activities of daily living (ADL), or consultation with a social worker). People with low vision (VA 0.1–0.25) were less likely to have received formal low vision rehabilitation, magnifying glasses, or other low vision aids than blind people (VA < 0.1). Furthermore, low cognitive capacity and living in an institution were associated with limited use of vision rehabilitation services. Of the visually impaired living in the community, 71% reported a need for assistance and 24% had an unmet need for assistance in everyday activities. The prevalence of limitations in ADL, instrumental activities of daily living (IADL), and mobility increased with decreasing VA (p < 0.001).
Visually impaired persons (VA ≤ 0.25) were four times more likely to have ADL disabilities than those with good VA (VA ≥ 0.8) after adjustment for sociodemographic and behavioral factors and chronic conditions (OR 4.36, 95% CI 2.44 – 7.78). Limitations in IADL and measured mobility were five times as likely (OR 4.82, 95% CI 2.38 – 9.76 and OR 5.37, 95% CI 2.44 – 7.78, respectively) and self-reported mobility limitations were three times as likely (OR 3.07, 95% CI 1.67 – 9.63) as in persons with good VA. The high prevalence of age-related eye diseases and subsequent visual impairment in the fastest growing segment of the population will result in a substantial increase in the demand for eye care services in the future. Many of the visually impaired, especially older persons with decreased cognitive capacity or living in an institution, have not had a recent vision examination and lack adequate low vision rehabilitation. This highlights the need for regular evaluation of visual function in the elderly and an active dissemination of information about rehabilitation services. Decreased VA is strongly associated with functional limitations, and even a slight decrease in VA was found to be associated with limited functioning. Thus, continuous efforts are needed to identify and treat eye diseases to maintain patients’ quality of life and to alleviate the social and economic burden of serious eye diseases.
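The odds ratios above come from regression models adjusted for covariates; as a minimal sketch of the underlying quantity, the crude OR and its 95% confidence interval can be derived from a 2x2 table with Woolf's log-OR method. All counts below are invented for illustration.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude OR with 95% CI from a 2x2 table (Woolf's method):
    a/b = exposed with/without outcome, c/d = unexposed with/without outcome."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# E.g. ADL disability among visually impaired vs. well-sighted (invented counts):
or_, lo, hi = odds_ratio_ci(a=30, b=40, c=50, d=300)
print(f"OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # OR 4.50 (95% CI 2.57-7.88)
```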
Abstract:
Intensive care is to be provided to patients benefiting from it, in an ethical, efficient, effective and cost-effective manner. This implies a long-term qualitative and quantitative analysis of intensive care procedures and related resources. The study population consists of 2709 patients treated in the general intensive care unit (ICU) of Helsinki University Hospital. The study investigates intensive care patients' mortality, quality of life (QOL), quality-adjusted life-years (QALY units) and factors related to severity of illness, length of stay (LOS), patient's age, evaluation period, as well as experiences and memories connected with the ICU episode. In addition, the study examines the qualities of two QOL measures, the RAND 36-Item Health Survey 1.0 (RAND-36) and the EuroQol-5D (EQ-5D), and assesses the correlation of their results. Patients treated in 1995 responded to the RAND-36 questionnaire in 1996. All patients treated from 1995-2000 received a QOL questionnaire in 2001, when 1-7 years had elapsed since the intensive treatment. The response rate was 79.5%. Main results: 1) Of the patients who died within the first year (n = 1047), 66% died during the intensive care period or within the following month. The non-survivors were older than the surviving patients, generally had higher than average APACHE II and SOFA scores depicting the severity of illness, and their ICU LOS was longer and hospital stay shorter than those of the surviving patients (p < 0.001). Mortality of patients receiving conservative treatment was higher than that of those receiving surgical treatment. Patients replying to the QOL survey in 2001 (n = 1099) had recovered well: 97% of them lived at home. More than half considered their QOL as good or extremely good, 40% as satisfactory and 7% as bad. All QOL indexes of those of working age were considerably lower (p < 0.001) than the comparable figures of the age- and gender-adjusted Finnish population. The 5-year monitoring period made evident that mental recovery was slower than physical recovery. 2) The results of RAND-36 and EQ-5D correlated well (p < 0.01). The RAND-36 profile measure distinguished more clearly between the different categories of QOL and their levels. EQ-5D measured the patient groups' general QOL well, and the sum index was used to calculate QALY units. 3) QALY units were calculated by multiplying the time the patient survived after the ICU stay, or the expected life-years, by the EQ-5D sum index. Aging automatically lowers the number of QALY units. Patients under the age of 65 receiving conservative treatment benefited from treatment to a greater extent, measured in QALY units, than their peers receiving surgical treatment, but in the age group 65 and over, patients with surgical treatment received higher QALY ratings than recipients of conservative treatment. 4) The intensive care experience and QOL ratings were connected. The QOL indices were statistically highest for those recipients with memories of intensive care as a positive experience, albeit their illness requiring intensive care treatment was less serious than average. No statistically significant differences were found in the QOL indices of those with negative memories, no memories, or those who did not express the quality of their experiences.
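A minimal sketch of the QALY computation described in point 3 above: survival after the ICU stay (or expected remaining life-years) weighted by the EQ-5D sum index. The values are illustrative only.

```python
def qaly(life_years: float, eq5d_index: float) -> float:
    """Quality-adjusted life-years: life-years weighted by the EQ-5D index (0-1)."""
    return life_years * eq5d_index

# Five post-ICU years at an EQ-5D index of 0.8 yield 4.0 QALYs; because older
# patients have fewer expected life-years, aging automatically lowers the QALY count.
print(qaly(5.0, 0.8))  # 4.0
print(qaly(2.0, 0.8))  # 1.6
```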
Abstract:
Some perioperative clinical factors related to primary cemented arthroplasty for osteoarthritis of the hip or knee joint are studied and discussed in this thesis. In a randomized, double-blind study, 39 patients were divided into two groups: one receiving tranexamic acid and the other not receiving it. Tranexamic acid was given at a dose of 10 mg/kg before the operation and twice thereafter, at 8-hour intervals. Total blood loss was smaller in the tranexamic acid group than in the control group. No thromboembolic complications were observed. In a prospective, randomized study, 58 patients with hip arthroplasty and 39 patients with knee arthroplasty were divided into groups with and without postoperative closed-suction drainage. There was no difference in the healing of the wounds, postoperative blood transfusions, complications, or range of motion. As a result of this study, the use of drains is no longer recommended. In a randomized study, the effectiveness of a femoral nerve block (25 patients) was compared with other methods of pain control (24 patients) on the first postoperative day after total knee arthroplasty. The femoral block consisted of a single injection administered at the patient's bedside during the surgeon's hospital rounds. Femoral block patients reported less pain and required half the amount of oxycodone. An additional femoral block or continued epidural analgesia was required more frequently by the control group patients. Pain management with femoral blocks resulted in less work for the nursing staff. In a retrospective study of 422 total hip and knee arthroplasty cases, C-reactive protein levels and the clinical course were examined. After hip and knee arthroplasty, the maximal C-reactive protein values are seen on the second and third postoperative days, after which the level decreases rapidly. There is no difference between patients with cemented and uncemented prostheses. Major postoperative complications may cause a further increase in C-reactive protein levels at one and two weeks. In-hospital and outpatient postoperative control radiographs of 200 hip and knee arthroplasties were reviewed retrospectively. If postoperative radiographs are of good quality, there seems to be no need for early repetitive radiographs. The quality and safety of follow-up are not compromised by limiting follow-up radiographs to those with clinical indications. Exposure of patients and staff to radiation is reduced, and reading of the radiographs by the treating orthopaedic surgeon alone is sufficient. These factors may seem separate from each other, but linking them together may help the treating orthopaedic surgeon form an adequate patient care strategy. Notable savings can be achieved.
Abstract:
The Vantaa Primary Care Depression Study (PC-VDS) is a naturalistic and prospective cohort study of primary care patients with depressive disorders. It forms a collaborative research project between the Department of Mental and Alcohol Research of the National Public Health Institute and the Primary Health Care Organization of the City of Vantaa. The aim is to obtain a comprehensive view of clinically significant depression in primary care, and to compare depressive patients in primary care and in secondary-level psychiatric care in terms of clinical characteristics. Consecutive patients (N=1111) in three primary care health centres were screened for depression with the PRIME-MD, and positive cases were interviewed by telephone. Cases with current depressive symptoms were diagnosed face-to-face with the Structured Clinical Interview for DSM-IV Axis I Disorders (SCID-I/P). A cohort of 137 patients with unipolar depressive disorders, comprising all patients with at least two depressive symptoms and clinically significant distress or disability, was recruited. The Structured Clinical Interview for DSM-IV Axis II Disorders (SCID-II), medical records, rating scales, interview, and a retrospective life-chart were used to obtain comprehensive cross-sectional and retrospective longitudinal information. For the investigation of suicidal behaviour, the Scale for Suicidal Ideation (SSI), patient records, and the interview were used. The methodology was designed to be comparable to the Vantaa Depression Study (VDS) conducted in secondary-level psychiatric care. Patients with major depressive disorder (MDD) aged 20-59 from primary care in the PC-VDS (N=79) were compared with new psychiatric outpatients (N=223) and inpatients (N=46) in the VDS. The PC-VDS cohort was prospectively followed up at 3, 6, and 18 months. Altogether 123 patients (90%) completed the follow-up. The duration of the index episode and the timing of relapses or recurrences were examined using a life-chart. The retrospective investigation revealed current MDD in most (66%), and lifetime MDD in nearly all (90%), cases of clinically significant depressive syndromes. Two-thirds of the “subsyndromal” cases had a history of major depressive episode (MDE), although they were currently either in partial remission or a potential prodromal phase. Recurrences and chronicity were common. The picture of depression was complicated by Axis I co-morbidity in 59%, Axis II co-morbidity in 52%, and chronic Axis III disorders in 47%; only 12% had no co-morbidity. Within their lifetimes, one-third (37%) had seriously considered suicide, and one-sixth (17%) had attempted it. Suicidal behaviour clustered in patients with moderate to severe MDD, co-morbidity with personality disorders, and a history of treatment in psychiatric care. The majority had received treatment for depression, but suicidal ideation had mostly remained unrecognised. The comparison of patients with MDD in primary care to those in psychiatric care revealed that the majority of suicidal or psychotic patients were receiving psychiatric treatment, and that the patients with the most severe symptoms and functional limitations were hospitalized. In other clinical respects, patients with MDD in primary care were surprisingly similar to psychiatric outpatients. Mental health contacts earlier in the current MDE were common among primary care patients. The 18-month prospective investigation with a life-chart methodology verified the chronic and recurrent nature of depression in primary care.
Only one-quarter of patients with MDD achieved and maintained full remission during the follow-up, while another quarter failed to remit at all. The remaining patients suffered either from residual symptoms or recurrences. While severity of depression was the strongest predictor of recovery, the presence of co-morbid substance use disorders, chronic medical illness, and cluster C personality disorders all contributed to an adverse outcome. In clinical decision making, besides severity of depression and co-morbidity, primary care doctors should not ignore a history of previous MDD; depression in primary care is usually severe enough to indicate at least follow-up and, for those with residual symptoms, an evaluation of their current treatment. Moreover, recognition of suicidal behaviour among depressed patients should be improved. In order to improve the outcome of depression in primary care, its often chronic and recurrent nature should be taken into account in organizing the care. According to the literature, chronic-disease management programs, with an enhanced role for case managers and greater integration of primary and specialist care, have been successful. Optimal ways of allocating resources between treatment providers as well as within health centres should be sought.
Abstract:
Technological development of fast multi-sectional, helical computed tomography (CT) scanners has made computed tomography perfusion (CTp) and angiography (CTA) feasible in evaluating acute ischemic stroke. This study focuses on new multidetector computed tomography techniques, namely whole-brain and first-pass CT perfusion, plus CTA of the carotid arteries. Whole-brain CTp data are acquired during slow infusion of contrast material to achieve a constant contrast concentration in the cerebral vasculature. From these data, quantitative maps of perfused cerebral blood volume (pCBV) are constructed. The probability curve of cerebral infarction as a function of normalized pCBV was determined in patients with acute ischemic stroke. Normalized pCBV, expressed as a percentage of contralateral normal brain pCBV, was determined in the infarction core and in regions just inside and outside the boundary between infarcted and noninfarcted brain. The corresponding probabilities of infarction were 0.99, 0.96, and 0.11; R² was 0.73; and the differences in perfusion between the core and the inner and outer bands were highly significant. Thus a probability-of-infarction curve can help predict the likelihood of infarction as a function of percentage normalized pCBV. First-pass CT perfusion is based on continuous cine imaging over a selected brain area during a bolus injection of contrast. During its first passage, contrast material compartmentalizes in the intravascular space, resulting in transient tissue enhancement. Functional maps of cerebral blood flow (CBF), cerebral blood volume (CBV), and mean transit time (MTT) are then constructed. We compared the effects of three different iodine concentrations (300, 350, or 400 mg/mL) on peak enhancement of normal brain tissue, artery, and vein, stratified by region-of-interest (ROI) location, in 102 patients within 3 hours of stroke onset. A monotonically increasing peak opacification was evident at all ROI locations, suggesting that CTp evaluation of patients with acute stroke is best performed with the highest available concentration of contrast agent. In another study we investigated whether lesion volumes on CBV, CBF, and MTT maps within 3 hours of stroke onset predict final infarct volume, and whether all these parameters are needed for triage to intravenous recombinant tissue plasminogen activator (IV-rtPA). The effect of IV-rtPA on the affected brain was also investigated by measuring salvaged tissue volume in patients receiving IV-rtPA and in controls. CBV lesion volume did not necessarily represent dead tissue. MTT lesion volume alone can serve to identify the upper size limit of the abnormally perfused brain, and patients receiving IV-rtPA salvaged more brain than did controls. Carotid CTA was compared with carotid DSA in the grading of stenosis in patients with stroke symptoms. In CTA, the grade of stenosis was determined by means of axial source and maximum intensity projection (MIP) images as well as a semiautomatic vessel analysis. CTA provides an adequate, less invasive alternative to conventional DSA, although it tends to underestimate clinically relevant grades of stenosis.
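A hedged sketch of a probability-of-infarction curve of the kind described above, mapping normalized pCBV (as % of contralateral brain) to infarction probability with a logistic function. The coefficients are invented to mimic the reported pattern (near-certain infarction in the core, roughly 0.1 outside the lesion boundary); they are not the study's fitted values.

```python
import math

def p_infarct(pcbv_percent: float, a: float = 12.0, b: float = 0.16) -> float:
    """Logistic model: P(infarction) as a function of normalized pCBV (%)."""
    return 1.0 / (1.0 + math.exp(-(a - b * pcbv_percent)))

for pcbv in (20, 50, 90):  # roughly: core, inner band, outer band (illustrative)
    print(f"pCBV {pcbv:3d}% -> P(infarct) = {p_infarct(pcbv):.2f}")
# -> 1.00, 0.98, 0.08 with these illustrative coefficients
```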
Abstract:
Juvenile idiopathic arthritis (JIA) is a heterogeneous group of childhood chronic arthritides, associated with chronic uveitis in 20% of cases. For JIA patients responding inadequately to conventional disease-modifying anti-rheumatic drugs (DMARDs), biologic therapies, namely anti-tumor necrosis factor (anti-TNF) agents, are available. In this retrospective multicenter study, 258 JIA patients refractory to DMARDs and receiving biologic agents during 1999-2007 were included. Prior to the initiation of anti-TNFs, growth velocity of 71 patients was delayed in 75% and normal in 25%. Those with delayed growth demonstrated a significant increase in growth velocity after the initiation of anti-TNFs. The increase in growth rate was unrelated to the pubertal growth spurt. No change was observed in skeletal maturation before and after anti-TNFs. The strongest predictor of change in growth velocity was the growth rate prior to anti-TNFs. Change in inflammatory activity remained a significant predictor even after the decrease in glucocorticoids was taken into account. In JIA-associated uveitis, the impact of two first-line biologic agents, etanercept and infliximab, and of a second-line or third-line anti-TNF agent, adalimumab, was evaluated. In 108 refractory JIA patients receiving etanercept or infliximab, uveitis occurred in 45 (42%). Uveitis improved in 14 (31%), no change was observed in 14 (31%), and in 17 (38%) uveitis worsened. Uveitis improved more frequently (p=0.047) and the frequency of annual uveitis flares was lower (p=0.015) in those on infliximab than in those on etanercept. Of 20 patients taking adalimumab, 19 (95%) had previously failed etanercept and/or infliximab. In 7 patients (35%) uveitis improved, in one (5%) it worsened, and in 12 (60%) no change occurred. Those with improved uveitis were younger and had a shorter disease duration. Serious adverse events (AEs) or side-effects were not observed. Adalimumab was effective also in arthritis. Long-term drug survival (i.e. the continuation rate on drug) with etanercept (n=105) vs. infliximab (n=104) was 68% vs. 68% at 24 months, and 61% vs. 48% at 48 months (p=0.194 in log-rank analysis). The first-line anti-TNF agent was discontinued due either to inefficacy (etanercept 28% vs. infliximab 20%, p=0.445), AEs (7% vs. 22%, p=0.002), or inactive disease (10% vs. 16%, p=0.068). Females, patients with systemic JIA (sJIA), and those taking infliximab as the first therapy were at higher risk of treatment discontinuation. One-third switched to a second anti-TNF agent, which was discontinued less often than the first. In conclusion, in refractory JIA, anti-TNFs induced enhanced growth velocity. Four-year treatment survival was comparable between etanercept and infliximab, and switching from a first-line to a second-line agent was a reasonable therapeutic option. During anti-TNF treatment, uveitis improved in one-third of patients with JIA-associated anterior uveitis.
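A minimal sketch of the Kaplan-Meier "drug survival" estimate behind the 24- and 48-month continuation rates quoted above, where an event is drug discontinuation and patients still on drug at last follow-up are censored. The follow-up data below are invented for illustration.

```python
def kaplan_meier(months, discontinued):
    """Kaplan-Meier steps: (time, survival) after each discontinuation event."""
    surv, steps = 1.0, []
    at_risk = len(months)
    for t, event in sorted(zip(months, discontinued)):
        if event:                    # discontinuation at month t
            surv *= (at_risk - 1) / at_risk
            steps.append((t, surv))
        at_risk -= 1                 # censored patients simply leave the risk set
    return steps

months       = [3, 8, 12, 20, 24, 30, 41, 48]          # months on drug (invented)
discontinued = [True, False, True, True, False, True, False, False]
for t, s in kaplan_meier(months, discontinued):
    print(f"month {t:2d}: still on drug {s:.2f}")
```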