Abstract:

For Independent Finland: The Military Committee, 1915–1918. In the course of the First World War, several organizations were founded with the purpose of making Finland independent or, at least, restoring her autonomous status. The Military Committee was the most significant active independence organization in Finland during the First World War, alongside the activist student movement, i.e., the Jaeger Movement. The Military Committee was founded in 1915 by officers who had attended the Hamina Cadet School, with the goal of creating a national army for a liberation war against the Russian troops. It was believed that a liberation war could succeed only with the help of the German Army. With the situation in society growing ever tenser in the autumn of 1917, the Military Committee also had to reckon with the possibility of a civil war. The activities of the Military Committee began on a small scale in early 1915 but gained significant momentum after the Russian Revolution in March 1917. In January 1918, the Military Committee formed the general staff for the White Army, the Senate's troops. The independence-related activities of the Hamina cadets during the First World War were more extensive and multifaceted than has been believed heretofore. The work of the Military Committee was divided between preparations for a liberation war in Finland, on the one hand, and in Stockholm and Berlin, on the other. In Finland, the Military Committee took part in intelligence gathering for Germany and in supporting the recruitment of Jaegers, and later in founding the civil guard organization, in resolving the question of the law and order authorities, and finally in selecting the Commander-in-Chief for the Senate's troops. Members of the Military Committee, especially Cavalry Captain Hannes Ignatius, contributed greatly to the drafting of the independence activists' national action plan in Stockholm in May 1917.
This plan preceded the formation of the civil guard organization. The Military Committee's role in founding the civil guards was initially minor, but in the fall of 1917 the Committee started to finance the activities of the civil guards, named several former officers as commanders of civil guard units, and finally took over the entire civil guard movement. In Stockholm and Berlin, the representatives of the Military Committee were in active contact with both the high command of the German Army and representatives of the Swedish Army. Colonel Nikolai Mexmontan, a representative of the Military Committee, collaborated with Swedish officers and Jaeger officers in Stockholm on comprehensive and detailed plans for starting the Liberation War. Under Mexmontan's leadership, there were serious negotiations on entering into a confederation with Germany. Lieutenant Colonel Wilhelm Thesleff, for his part, became the commander of the Jaeger Battalion 27. The influence and importance of the Military Committee came to the forefront in independent and conflict-torn Finland. The Military Committee became a Senate committee on 7 January 1918, with its chairman, for all practical purposes, designated Commander-in-Chief in an eventual war. Lieutenant General Claes Charpentier chaired the Military Committee from mid-December 1917 onwards, but on 15 January 1918 he had to resign in favour of Lieutenant General Gustaf Mannerheim. Soon after, Mannerheim received an order from the chairman of the Senate, P. E. Svinhufvud, to organize the law and order authorities and assume their leadership. The chairman of the Military Committee thus became the Commander-in-Chief of the Senate's troops in January 1918, and the Military Committee became the Commander-in-Chief's general staff. The Military Committee had turned from a clandestine organization into the first general staff of the independent Finnish Army.

Abstract:

Since the Chinese government began implementing economic reforms in the late 1970s, China has experienced profound economic change and growth. Like other parts of China, Tibetan areas have also experienced wide-ranging economic change, with growth even higher than the China-wide average in certain years. Though China's strategic policy of developing the West provided many opportunities for economic and business activities, Tibetans have proven poorly equipped to respond to and take advantage of these opportunities. This study is about people, about market participation, and specifically about why Tibetans do not effectively participate in the market in the context of China's economic development process. Many political, social, cultural, and environmental factors explain the difficulties met by Tibetan communities. This study, however, focuses on three factors: the social and cultural context, government policy, and education. The Buddhist character of Tibetan communities, and particularly the political and economic system of traditional Tibetan society, helps explain this weak market participation, especially after the implementation of new national economic policies. An inclusive economic development policy that promotes local people's participation in the market demands serious consideration of local conditions. Unfortunately, such considerations often ignore local Tibetan realities. Economic development policy in Tibetan areas of China is nearly always an attempt to replicate the inland model and open up markets, even though economic and sociopolitical conditions in Tibet are markedly unlike much of China. A consequence of these policies is an increasing number of non-Tibetan migrants flowing into Tibetan areas, with the ensuing marginalization of Tibetans in the marketplace. Poor-quality education is another factor contributing to Tibetans' inability to participate effectively in the market.
Vocational and business education targeting Tibetans is of very low quality, reflecting the government's failure to consider local circumstances when implementing education policy. The relatively few Tibetans who do receive education are nearly always unable to compete with non-Tibetan migrants in commercial activity. Encouraging and promoting Tibetan participation in business development, and access to quality education, are crucial for a sustainable and prosperous society in the long term. In particular, a localized development policy that takes into account local environmental conditions and production as well as local culture is crucial. Tibet's economic development should be based on local environmental and production conditions, while drawing on Tibetan culture to build a sustainable economy. Such a localized approach best promotes Tibetan market participation. Keywords: Tibet, cultural policy, education, market participation

Abstract:

In the 21st century, human-induced global climate change has been highlighted as one of the most serious threats to ecosystems worldwide. According to global climate scenarios, the mean temperature in Finland is expected to increase by 1.8 to 4.0°C by the end of the century. The regional and seasonal change in temperature is predicted to be spatially and temporally asymmetric, with the high Arctic and Antarctic areas, and the winter and spring seasons, projected to face the greatest temperature increases. To understand how species respond to the ongoing climate change, we need to study how climate affects species in different phases of their life cycle. The impact of climate on the breeding and migration of eight large-sized bird species was studied in this thesis, taking food availability into account. The findings show that climatic variables have a considerable impact on the life-history traits of large-sized birds in northern Europe. The magnitude of climatic effects on migration and breeding was comparable with that of food supply, conventionally regarded as the main factor affecting these life-history traits. Based on the results of this thesis and the current climate scenarios, the following, not mutually exclusive, responses are possible in the near future. Firstly, asymmetric climate change may result in a mistiming of breeding, because mild winters and early springs may lead to earlier breeding, whereas offspring hatch into colder conditions that elevate mortality. Secondly, climate-induced responses can differ between species with different breeding tactics (income vs. capital breeding), so that capital breeders in particular can benefit from global warming, as they can sustain higher energy reserves. Thirdly, increasing precipitation has the potential to reduce the breeding success of many species by exposing nestlings to more severe post-hatching conditions and hampering the hunting conditions of parents.
Fourthly, decreasing ice cover and earlier ice break-up in the Baltic Sea will allow earlier spring migration in waterfowl. In eiders, this can potentially lead to more productive breeding. Fifthly, warming temperatures can favour parents preparing for breeding and increase nestling survival. Lastly, the climate-induced phenological changes in life-history events will likely continue. Furthermore, climate and food resources can interact with each other in complex ways. Eiders provide an illustrative example of this complexity, being caught in the crossfire between more benign ice conditions and lower salinity negatively affecting their prime food resource. The general conclusion is that climate controls not only the phenology of the species but also their reproductive output, thus affecting the entire population dynamics.

Abstract:

Wild salmon stocks in the northern Baltic rivers became endangered in the second half of the 20th century, mainly due to recruitment overfishing. As a result, supplementary stocking was widely practised, and supplementation of the Tornionjoki salmon stock took place over a 25-year period until 2002. The stock has been closely monitored by electrofishing, smolt trapping, mark-recapture studies, catch samples, and catch surveys. Background information on hatchery-reared stocked juveniles was also collected for this study. Bayesian statistics were applied to the data, as this approach offers the possibility of bringing prior information into the analysis, an advanced ability to incorporate uncertainty, and probabilities for a multitude of hypotheses. Substantial divergences between reared and wild Tornionjoki salmon were identified in both demographic and phenological characteristics. The divergences tended to be larger the longer the duration spent in the hatchery and the more favourable the hatchery conditions were for fast growth. Differences in environment likely induced most of the divergences, but the selection of brood fish might have resulted in genotypic divergence in the maturation age of reared salmon. Survival of stocked one-year-old juveniles to the smolt stage varied from about 10% to about 25%. Stocking on the lower reach of the river seemed to decrease survival, and the negative effect of stocking volume on survival raises concern about possible similar effects on the extant wild population. Post-smolt survival of wild Tornionjoki smolts was on average two times higher than that of smolts stocked as parr and 2.5 times higher than that of stocked smolts. Smolts of the different groups showed synchronous variation and similar long-term survival trends. Both groups of reared salmon were more vulnerable to offshore driftnet and coastal trapnet fishing than wild salmon.
Average survival from smolt to spawner of wild salmon was 2.8 times higher than that of salmon stocked as parr and 3.3 times higher than that of salmon stocked as smolts. Wild salmon and salmon stocked as parr were found to have similar lifetime survival rates, while stocked smolts had a lifetime survival rate over four times higher than the two other groups. If eggs are collected from wild brood fish, stocking parr would therefore not be a sensible option. Stocking smolts instead would create a net benefit in terms of the number of spawners, but this strategy has serious drawbacks and risks associated with the larger phenotypic and demographic divergences from wild salmon. Supplementation was shown not to be the key factor behind the recovery of the Tornionjoki and other northern Baltic salmon stocks. Instead, a combination of restrictions in the sea fishery and the simultaneous occurrence of natural conditions favourable for survival were the main reasons for the revival in the 1990s. This study questions the effectiveness of supplementation as a conservation management tool. The benefits of supplementation seem at best limited. Relatively high occurrences of reared fish in catches may generate false optimism concerning the effects of supplementation. Supplementation may lead to genetic risks due to problems in brood fish collection and to artificial rearing with relaxed natural selection and domestication. Appropriate management of fisheries is the main alternative to supplementation, without which all other efforts for the long-term maintenance of a healthy fish resource fail.
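The Bayesian machinery described above can be illustrated with a minimal conjugate example. The sketch below is a toy Beta-Binomial update for a smolt-to-spawner survival probability; the prior and the release and recapture numbers are invented for illustration and are not the study's actual data or model, which was considerably more elaborate.

```python
# Hypothetical illustration: Beta-Binomial posterior for smolt-to-spawner
# survival, in the spirit of the Bayesian approach described above.
# All numbers are invented for the example, not taken from the study.

def beta_binomial_posterior(alpha_prior, beta_prior, recaptured, released):
    """Update a Beta(alpha, beta) prior on survival probability with
    binomial mark-recapture data: `recaptured` survivors out of `released`."""
    alpha_post = alpha_prior + recaptured
    beta_post = beta_prior + (released - recaptured)
    mean = alpha_post / (alpha_post + beta_post)
    return alpha_post, beta_post, mean

# Vague Beta(1, 1) prior; 120 tagged smolts released, 18 recovered as spawners.
a, b, mean = beta_binomial_posterior(1, 1, 18, 120)
print(round(mean, 3))  # posterior mean survival
```

The appeal of the conjugate form is that prior information (e.g., survival estimates from earlier cohorts) enters simply as pseudo-counts in `alpha_prior` and `beta_prior`, and the full posterior, not just a point estimate, is available for probability statements about hypotheses.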

Abstract:

One major reason for the global decline of biodiversity is habitat loss and fragmentation. Conservation areas can be designed to reduce biodiversity loss, but as resources are limited, conservation efforts need to be prioritized in order to achieve the best possible outcomes. The field of systematic conservation planning developed as a response to opportunistic approaches to conservation that often resulted in a biased representation of biological diversity. The last two decades have seen the development of increasingly sophisticated methods that account for information about biodiversity conservation goals (benefits), economic considerations (costs), and socio-political constraints. In this thesis I focus on two general topics related to systematic conservation planning. First, I address two aspects of the question of how biodiversity features should be valued. (i) I investigate the extremely important but often neglected issue of differential prioritization of species for conservation. Species prioritization can be based on various criteria and is always goal-dependent, but it can also be implemented in a scientifically more rigorous way than is the usual practice. (ii) I introduce a novel framework for conservation prioritization, based on continuous benefit functions that convert increasing levels of biodiversity feature representation into increasing conservation value, using the principle that more is better. Traditional target-based systematic conservation planning is a special case of this approach, in which a step function is used as the benefit function. We have further expanded the benefit function framework for area prioritization to address issues such as protected area size and habitat vulnerability. In the second part of the thesis I address the application of community-level modelling strategies to conservation prioritization.
One of the most serious issues in systematic conservation planning currently is not a deficiency of methodology for selection and design, but simply the lack of data. Community-level modelling offers a surrogate strategy that makes conservation planning more feasible in data-poor regions. We have reviewed the available community-level approaches to conservation planning. These range from simplistic classification techniques to sophisticated modelling and selection strategies. We have also developed a general and novel community-level approach to conservation prioritization that significantly improves on previously available methods. This thesis introduces further degrees of realism into conservation planning methodology. The benefit-function-based conservation prioritization framework largely circumvents the problematic phase of target setting and, by allowing trade-offs between species representations, provides a more flexible and hopefully more attractive approach for conservation practitioners. The community-level approach seems highly promising and should prove valuable for conservation planning, especially in data-poor regions. Future work should focus on integrating prioritization methods to deal with the multiple aspects that jointly influence the prioritization process, and on further testing and refining the community-level strategies using real, large datasets.
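The contrast between target-based planning and continuous benefit functions can be sketched numerically. In this toy example (the target, the exponent, and the representation levels are invented, not taken from the thesis), a step function awards full conservation value only once a representation target is met, while a concave benefit function rewards every increment of representation with diminishing returns.

```python
# Minimal sketch with invented parameters: conservation value as a function
# of the fraction of a species' distribution that is protected.

def step_benefit(representation, target=0.2):
    """Traditional target-based planning: full value once the target is met,
    zero value below it."""
    return 1.0 if representation >= target else 0.0

def concave_benefit(representation, exponent=0.5):
    """Continuous benefit function: 'more is better', with diminishing
    returns as representation grows (concave power function)."""
    return representation ** exponent

for r in (0.1, 0.2, 0.4):
    print(r, step_benefit(r), round(concave_benefit(r), 3))
```

Under the step function, protecting 10% of a distribution earns nothing, which is exactly the rigidity that target setting imposes; the concave function instead assigns partial credit to partial protection, making trade-offs between species explicit.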

Abstract:

The ongoing rapid fragmentation of tropical forests is a major threat to global biodiversity, because many tropical forests are so-called biodiversity 'hotspots', areas that host exceptional species richness and concentrations of endemic species. Forest fragmentation has negative ecological and genetic consequences for plant survival. Proposed reasons for plant species loss in forest fragments include abiotic edge effects, altered species interactions, increased genetic drift, and inbreeding depression. To conserve plants in forest fragments, the ecological and genetic processes that threaten the species have to be understood. That is possible only with adequate information on their biology, including taxonomy, life history, reproduction, and the spatial and genetic structure of the populations. In this research, I focused on the African violet (genus Saintpaulia), a little-studied conservation flagship from the Eastern Arc Mountains and Coastal Forests hotspot of Tanzania and Kenya. The main objective of the research was to increase understanding of the life history, ecology, and population genetics of Saintpaulia, knowledge needed for the design of appropriate conservation measures. A further aim was to provide population-level insights into the difficult taxonomy of Saintpaulia. Ecological fieldwork was conducted in a relatively little-fragmented protected forest in the Amani Nature Reserve in the East Usambara Mountains of northeastern Tanzania, complemented by population genetic laboratory work and ecological experiments in Helsinki, Finland. All components of the research were conducted with Saintpaulia ionantha ssp. grotei, which forms a taxonomically controversial population complex in the study area. My results suggest that Saintpaulia has good reproductive performance in forests with low disturbance levels in the East Usambara Mountains.
Another important finding was that seed production depends on sufficient pollinator service. The availability of pollinators should thus be considered in the in situ management of threatened populations. Dynamic population stage structures were observed, suggesting that the studied populations are demographically viable. High mortality of seedlings and juveniles was observed during the dry season, but this was compensated by ample recruitment of new seedlings after the rainy season. Reduced tree canopy closure and substrate quality are likely to exacerbate seedling and juvenile mortality; forest fragmentation and disturbance are therefore serious threats to the regeneration of Saintpaulia. Restoration of sufficient shade to enhance seedling establishment is an important conservation measure in populations located in disturbed habitats. Long-term demographic monitoring, which enables the forecasting of a population's future, is also recommended in disturbed habitats. High genetic diversities were observed in the populations, suggesting that they possess the variation needed for evolutionary responses in a changing environment. Thus, genetic management of the studied populations does not seem necessary as long as the habitats remain favourable for Saintpaulia. The high levels of inbreeding observed in some of the populations, and the reduced fitness of inbred progeny compared to outbred progeny revealed by the hand-pollination experiment, indicate that inbreeding and inbreeding depression are potential mechanisms contributing to the extinction of Saintpaulia populations. The relatively weak genetic divergence of the three morphotypes of Saintpaulia ionantha ssp. grotei lends support to the hypothesis that the populations in the Usambara/lowlands region represent a segregating metapopulation (or metapopulations), whose subpopulations are adapting to their particular environments.
The partial genetic and phenological integrity, and the distinct trailing habit, of the morphotype 'grotei' would, however, justify its placement in a taxonomic rank of its own, perhaps at subspecific rank.

Abstract:

Biological invasions are considered one of the greatest threats to biodiversity, as they may lead to the disruption and homogenization of natural communities and, in the worst case, to native species extinctions. The introduction of genetically modified organisms (GMOs) into agricultural, fisheries, and forestry practices brings them into contact with natural populations. GMOs may act as new invasive species if they are able to (1) invade natural habitats or (2) hybridize with their wild relatives. The benefits of GMOs, such as increased yield or decreased use of insecticides or herbicides in cultivation, may thus be offset by the potential risks they pose. A careful ecological risk analysis therefore has to precede any responsible GMO introduction. In this thesis I study ecological invasion in relation to GMOs, and what kinds of consequences invasion may have in natural populations. A set of theoretical models that combine life-history evolution, population dynamics, and population genetics was developed for the hazard identification part of the ecological risk assessment of GMOs. In addition, the potential benefits of GMOs in the management of an invasive pest were analyzed. In the first study I showed that a population fluctuating due to scramble-type density dependence (caused, e.g., by nutrient competition in plants) may be invaded by a population that is relatively more limited by a resource (e.g., light in plants) that causes contest-type density dependence. This result emphasises the higher risk of invasion in unstable environments. The next two studies focused on the escape of growth hormone (GH) transgenic fish into a natural population. The results showed that previous models may have given too pessimistic a view of the so-called Trojan gene effect, whereby the invading genotype is harmful to the population as a whole.
The previously suggested population extinctions did not occur in my studies, since the changes in mating preferences caused by the GH fish were ameliorated by a decreased level of competition. The GH invaders may also have to exceed a threshold density before invasion can be successful. I also showed that the prevalence of the mature parr (a.k.a. 'sneaker') strategy among GH fish may have a clear effect on the invasion outcome. The fourth study assessed the risks of, and developed methods against, the invasion of the Colorado potato beetle (CPB, Leptinotarsa decemlineata). I showed that the eradication of CPB is the most important measure for preventing its establishment, but the cultivation of transgenic Bt potato could also be effective. In general, my results show that the invasion of transgenic species or genotypes is possible under certain realistic conditions, resulting in competitive exclusion, population decline through outbreeding depression, and genotypic displacement of native species. Ecological risk assessment should regard the decline and displacement of the wild genotype by an introduced one as a consequence as serious as population extinction. It will also be crucial to take into account behavioural differences among species when assessing the possible hazards that escaped GMOs may cause. The benefits found for GMO crop effectiveness in pest management may also be too optimistic, since CPB may evolve resistance to the Bt toxin. The models in this thesis could be further applied in case-specific risk assessment of GMOs by supplementing them with detailed data on the species' biology, the effect of the introduced transgene, and the characteristics of the populations or environments at risk of being invaded.
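The distinction between scramble- and contest-type density dependence drawn in the first study can be illustrated with two standard textbook maps; these are generic stand-ins, not the thesis's actual models, and all parameter values are invented. The Ricker map (often used for scramble competition) overshoots and fluctuates at high growth rates, whereas the Beverton-Holt map (often used for contest competition) approaches its equilibrium smoothly.

```python
# Hedged sketch with generic textbook models and invented parameters:
# scramble competition via the Ricker map (fluctuates when r > 2),
# contest competition via the Beverton-Holt map (converges to k).
import math

def ricker(n, r=2.6, k=100.0):
    """Scramble-type density dependence: overcompensatory, fluctuating."""
    return n * math.exp(r * (1.0 - n / k))

def beverton_holt(n, r=2.6, k=100.0):
    """Contest-type density dependence: monotone convergence to k."""
    growth = math.exp(r)
    return growth * n / (1.0 + (growth - 1.0) * n / k)

n_scramble = n_contest = 10.0
for _ in range(50):
    n_scramble = ricker(n_scramble)
    n_contest = beverton_holt(n_contest)
print(round(n_contest, 1))  # contest population sits at its equilibrium
```

Iterating both maps from the same starting density shows the contest-type population settling at its equilibrium while the scramble-type population keeps fluctuating, the kind of instability that the first study links to a higher risk of invasion.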

Abstract:

Extraintestinal pathogenic Escherichia coli (ExPEC) represent a diverse group of E. coli strains that infect extraintestinal sites, such as the urinary tract, the bloodstream, the meninges, the peritoneal cavity, and the lungs. Urinary tract infections (UTIs) caused by uropathogenic E. coli (UPEC), the major subgroup of ExPEC, are among the most prevalent microbial diseases worldwide and a substantial burden for public health care systems. UTIs are responsible for serious morbidity and mortality in the elderly, in young children, and in immunocompromised and hospitalized patients. ExPEC strains differ, from both genetic and clinical perspectives, from commensal E. coli strains belonging to the normal intestinal flora and from intestinal pathogenic E. coli strains causing diarrhea. ExPEC strains are characterized by a broad range of alternative virulence factors, such as adhesins, toxins, and iron accumulation systems. Unlike diarrheagenic E. coli, whose distinctive virulence determinants evoke characteristic diarrheagenic symptoms and signs, ExPEC strains are exceedingly heterogeneous and are known to possess no specific virulence factor, or set of factors, that is obligatory for the infection of a given extraintestinal site (e.g., the urinary tract). The ExPEC genomes are highly diverse mosaic structures in permanent flux. These strains have obtained a significant amount of DNA (estimated at up to 25% of the genome) through the acquisition of foreign DNA from diverse related or non-related donor species by lateral transfer of mobile genetic elements, including pathogenicity islands (PAIs), plasmids, phages, transposons, and insertion elements. The ability of ExPEC strains to cause disease is mainly derived from this horizontally acquired gene pool; the exogenous DNA facilitates rapid adaptation of the pathogen to changing conditions and hence widens the spectrum of sites that can be infected.
However, neither the amount of unique DNA in different ExPEC strains (or UPEC strains) nor the mechanisms behind the observed genomic mobility are known. Owing to this extreme heterogeneity of the UPEC, and ExPEC populations in general, the routine surveillance of ExPEC is exceedingly difficult. In this project, we presented a novel virulence gene algorithm (VGA) for estimating the extraintestinal virulence potential (VP, pathogenicity risk) of clinically relevant ExPEC and fecal E. coli isolates. The VGA was based on a DNA microarray specific for the ExPEC phenotype (ExPEC pathoarray). This array contained 77 DNA probes homologous with known (e.g., adhesion factors, iron accumulation systems, and toxins) and putative (e.g., genes predicted to be involved in adhesion, iron uptake, or metabolic functions) ExPEC virulence determinants. In total, 25 of the DNA probes homologous with known virulence factors and 36 of the DNA probes representing putative extraintestinal virulence determinants were found at significantly higher frequency in virulent ExPEC isolates than in commensal E. coli strains. We showed that the ExPEC pathoarray and the VGA could readily be used to differentiate highly virulent ExPEC both from less virulent ExPEC clones and from commensal E. coli strains. Applying the VGA to a group of unknown ExPEC (n=53) and fecal E. coli isolates (n=37), 83% of strains were correctly identified as extraintestinally virulent or commensal E. coli. Conversely, 15% of clinical ExPEC and 19% of fecal E. coli strains failed to sort into their respective pathogenic and non-pathogenic groups. Clinical data and the virulence gene profiles of these strains supported the estimated VPs; UPEC strains with atypically low risk ratios were largely isolated from patients with a particular medical history, including diabetes mellitus or catheterization, or from elderly patients. In addition, fecal E. coli strains with VPs characteristic of ExPEC were shown to represent a diagnostically important fraction of resident strains of the gut flora with a high potential for causing extraintestinal infections. Interestingly, a large fraction of the DNA probes associated with the ExPEC phenotype corresponded to novel DNA sequences without any known function in UTIs and thus represented new genetic markers of extraintestinal virulence. These DNA probes included unknown DNA sequences originating from genomic subtractions of four clinical ExPEC isolates as well as from five novel cosmid sequences identified in the UPEC strains HE300 and JS299. The characterized cosmid sequences (pJS332, pJS448, pJS666, pJS700, and pJS706) revealed complex modular DNA structures, with known and unknown DNA fragments arranged in a puzzle-like manner and integrated into the common E. coli genomic backbone. Furthermore, cosmid pJS332 of the UPEC strain HE300, which carried a chromosomal virulence gene cluster (iroBCDEN) encoding the salmochelin siderophore system, was shown to be part of a transmissible plasmid of Salmonella enterica. Taken together, the results of this project pointed towards two conclusions: (i) homologous recombination, even within coding genes, contributes to the observed mosaicism of ExPEC genomes; and (ii) besides en bloc transfer of large DNA regions (e.g., chromosomal PAIs), rearrangements of small DNA modules also provide a means of genomic plasticity. The data presented in this project supplemented previous whole-genome sequencing projects of E. coli and indicated that each E. coli genome displays a unique assemblage of individual mosaic structures, which enable these strains to successfully colonize and infect different anatomical sites.
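The scoring idea behind a virulence gene algorithm of this kind can be caricatured in a few lines. The probe names, weights, and threshold below are invented for illustration; the actual VGA was built on the full 77-probe pathoarray and a statistically derived scoring rule, not this toy weighting.

```python
# Hypothetical sketch of a VGA-style classifier. Probe names, weights,
# and the threshold are invented for illustration only.

VIRULENCE_PROBES = {"papG": 2, "hlyA": 2, "iroN": 1, "fyuA": 1, "sfaS": 1}

def virulence_potential(detected_genes, probes=VIRULENCE_PROBES):
    """Score a strain by summing the weights of probes that hybridize."""
    return sum(w for gene, w in probes.items() if gene in detected_genes)

def classify(detected_genes, threshold=3):
    """Label a strain ExPEC-like if its score reaches the threshold."""
    if virulence_potential(detected_genes) >= threshold:
        return "ExPEC-like"
    return "commensal-like"

print(classify({"papG", "hlyA", "fyuA"}))  # score 5
print(classify({"fyuA"}))                  # score 1
```

A threshold rule like this also exhibits the failure mode reported above: a genuinely pathogenic isolate carrying few of the arrayed determinants scores low and sorts into the wrong group.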

Abstract:

Essential thrombocythaemia (ET) is a myeloproliferative disease (MPD) characterized by thrombocytosis, i.e. a constant elevation of the platelet count. Thrombocytosis may appear in MPDs (ET, polycythaemia vera, chronic myeloid leukaemia, myelofibrosis) and as a reactive phenomenon. The differential diagnosis of thrombocytosis is important, because the clinical course, need for therapy, and prognosis differ between patients with MPDs and those with reactive thrombocytosis. ET patients may remain asymptomatic for years, but serious thrombohaemorrhagic and pregnancy-related complications may occur, and these complications are difficult to predict. The aims of the present study were to evaluate the diagnostic findings, clinical course, and prognostic factors of ET. This retrospective study comprises 170 ET patients. Two thirds had a platelet count below 1000 x 10^9/l. The diagnosis was supported by an increased number of megakaryocytes with abnormal morphology in bone marrow aspirates, aggregation defects in platelet function studies, and the presence of spontaneous erythroid and/or megakaryocytic colony formation in in vitro cultures of haematopoietic progenitors. About 70% of the patients had spontaneous colony formation, while about 30% had a normal growth pattern. Only a fifth of the patients remained asymptomatic. Half had a major thrombohaemorrhagic complication. The proportion of patients suffering from thrombosis was as high as 45%. About a fifth had major bleedings. Half of the patients had microvascular symptoms. Age over 60 years increased the risk of major bleedings, but the occurrence of thrombotic complications was similar in all age groups. Male gender, smoking in female patients, the presence of any spontaneous colony formation, and the presence of spontaneous megakaryocytic colony formation in younger patients were identified as risk factors for thrombosis. Pregnant ET patients had an increased risk of complications.
Forty-five per cent of the pregnancies were complicated, and 38 % of them ended in stillbirth. Treatment with acetylsalicylic acid alone or in combination with platelet-lowering drugs improved the outcome of the pregnancy. The present findings about risk factors in ET, as well as treatment outcome in the pregnancies of ET patients, should be taken into account when planning treatment strategies for Finnish patients.

Relevance:

10.00%

Publisher:

Abstract:

Background. Cardiovascular disease (CVD) remains the most serious threat to life and health in industrialized countries. Atherosclerosis is the main underlying pathology associated with CVD, in particular coronary artery disease (CAD), ischaemic stroke, and peripheral arterial disease. Risk factors play an important role in initiating and accelerating the complex process of atherosclerosis. Most studies of risk factors have focused on the presence or absence of clinically defined CVD. Less is known about the determinants of the severity and extent of atherosclerosis in symptomatic patients. Aims. To clarify the association between coronary and carotid artery atherosclerosis, and to study the determinants associated with these abnormalities with special regard to novel cardiovascular risk factors. Subjects and methods. Quantitative coronary angiography (QCA) and B-mode ultrasound were used to assess coronary and carotid artery atherosclerosis in 108 patients with clinically suspected CAD referred for elective coronary angiography. To evaluate anatomic severity and extent of CAD, several QCA parameters were incorporated into indexes. These measurements reflected CAD severity, extent, and overall atheroma burden and were calculated for the entire coronary tree and separately for different coronary segments (i.e., left main, proximal, mid, and distal segments). Maximum and mean intima-media thickness (IMT) values of carotid arteries were measured and expressed as mean aggregate values. Furthermore, the study design included extensive fasting blood samples, oral glucose tolerance test, and an oral fat-load test to be performed in each participant. Results. Maximum and mean IMT values were significantly correlated with CAD severity, extent, and atheroma burden. There was heterogeneity in associations between IMT and CAD indexes according to anatomical location of CAD. 
Maximum and mean IMT values, respectively, were correlated with QCA indexes for the mid and distal segments but not for the proximal segments of coronary vessels. The values of paraoxonase-1 (PON1) activity and concentration, respectively, were lower in subjects with significant CAD, and there was a significant relationship between PON1 activity and concentration and coronary atherosclerosis assessed by QCA. PON1 activity was a significant determinant of severity of CAD independently of HDL cholesterol. Neither PON1 activity nor concentration was associated with carotid IMT. The concentrations of triglycerides (TGs), triglyceride-rich lipoproteins (TRLs), oxidized LDL (oxLDL), and the cholesterol content of remnant lipoprotein particles (RLP-C) were significantly increased at 6 hours after intake of an oral fatty meal as compared with fasting values. The mean peak size of LDL remained unchanged 6 hours after the test meal. The correlations between total TGs, TRLs, and RLP-C in the fasting and postprandial states were highly significant. RLP-C correlated with oxLDL in both the fasting and the fed state, and inversely with LDL size. In multivariate analysis, oxLDL was a determinant of severity and extent of CAD. None of total TGs, TRLs, oxLDL, or LDL size was linked to carotid atherosclerosis. Insulin resistance (IR) was associated with an increased severity and extent of coronary atherosclerosis and seemed to be a stronger predictor of coronary atherosclerosis in the distal parts of the coronary tree than in the proximal and mid parts. In the multivariate analysis, IR was a significant predictor of the severity of CAD. IR did not correlate with carotid IMT. Maximum and mean carotid IMT were higher in patients with the apoE4 phenotype compared with subjects with the apoE3 phenotype. Likewise, patients with the apoE4 phenotype had more severe and extensive CAD than individuals with the apoE3 phenotype. Conclusions. 
1) There is an association between carotid IMT and the severity and extent of CAD. Carotid IMT seems to be a weaker predictor of coronary atherosclerosis in the proximal parts of the coronary tree than in the mid and distal parts. 2) PON1 activity has an important role in the pathogenesis of coronary atherosclerosis. More importantly, the study illustrates how the protective role of HDL could be modulated by its components, such that equivalent serum concentrations of HDL cholesterol may not equate with an equivalent protective capacity. 3) RLP-C in the fasting state is a good marker of postprandial TRLs. Circulating oxLDL increases in CAD patients postprandially. The highly significant positive correlation between postprandial TRLs and postprandial oxLDL suggests that the postprandial state creates oxidative stress. Our findings emphasize the fundamental role of LDL oxidation in the development of atherosclerosis even after inclusion of conventional CAD risk factors. 4) Disturbances in glucose metabolism are crucial in the pathogenesis of coronary atherosclerosis. In fact, subjects with IR are comparable with diabetic subjects in terms of severity and extent of CAD. 5) ApoE polymorphism is involved in the susceptibility to both carotid and coronary atherosclerosis.

Relevance:

10.00%

Publisher:

Abstract:

The aim of the study was to evaluate gastrointestinal (GI) complications after kidney transplantation in the Finnish population. The adult patients included underwent kidney transplantation at Helsinki University Central Hospital in 1990-2000. Data on GI complications were collected from the Finnish Kidney Transplantation Registry, patient records, and questionnaires sent to patients. Helicobacter pylori IgG and IgA antibodies were measured in 500 patients before kidney transplantation and after a median 6.8-year follow-up. Oesophagogastroduodenoscopy with biopsies was performed on 46 kidney transplantation patients suffering from gastroduodenal symptoms and 43 dyspeptic controls for studies of gastroduodenal cytomegalovirus (CMV) infection. Gallbladder ultrasound was performed on 304 patients after a median of 7.4 years post-transplantation. Data from these 304 patients were also collected on serum lipids, body mass index, and the use of statin medication. Severe GI complications occurred in 147 (10%) of 1515 kidney transplantations, 6% of them fatal, after a median of 0.93 years. 51% of the complications occurred during the first post-transplantation year, with the highest incidence of gastroduodenal ulcers and complications of the colon. Patients with GI complications were older and more often had delayed graft function, and patients with polycystic kidney disease had more GI complications than other patients. The H. pylori seropositivity rate was 31%, and this had no influence on graft or patient survival. 29% of the H. pylori-seropositive patients seroreverted without eradication therapy. 74% of kidney transplantation patients had CMV-specific matrix protein pp65- or delayed early protein p52-positive findings in the gastroduodenal mucosa, and 53% of the pp65- or p52-positive patients had gastroduodenal erosions without H. pylori findings. After transplantation, 165 (11%) patients developed gallstones. 
A biliary complication, including one fatal cholecystitis, developed in 15% of the patients with gallstones. Thirteen (0.9%) patients had pancreatitis. Colon perforations, 31% of them fatal, occurred in 16 (1%) patients. Thirteen (0.9%) developed a GI malignancy during the follow-up. Two H. pylori-seropositive patients developed gastroduodenal malignancies during the follow-up. In conclusion, severe GI complications usually occur early after kidney transplantation. Colon perforations are especially serious in kidney transplantation patients, and colon diverticulosis and gallstones should be screened for and treated before transplantation. When found, H. pylori infection should also be treated in these patients.

Relevance:

10.00%

Publisher:

Abstract:

This study is part of an ongoing collaborative bipolar research project, the Jorvi Bipolar Study (JoBS). The JoBS is run by the Department of Mental Health and Alcohol Research of the National Public Health Institute, Helsinki, and the Department of Psychiatry, Jorvi Hospital, Helsinki University Central Hospital (HUCH), Espoo, Finland. It is a prospective, naturalistic cohort study of secondary-level care psychiatric in- and outpatients with a new episode of bipolar disorder (BD). The second report also included 269 major depressive disorder (MDD) patients from the Vantaa Depression Study (VDS). The VDS was carried out in collaboration with the Department of Psychiatry of the Peijas Medical Care District. Using the Mood Disorder Questionnaire (MDQ), all in- and outpatients at the Department of Psychiatry at Jorvi Hospital who currently had a possible new phase of DSM-IV BD were sought. Altogether, 1630 psychiatric patients were screened, and 490 were interviewed using a semistructured interview (SCID-I/P). The patients included in the cohort (n=191) had a current phase of BD at intake. The patients were evaluated at intake and at 6- and 18-month interviews. Based on this study, BD is poorly recognized even in psychiatric settings. Of the BD patients with acute worsening of illness, 39% had never been correctly diagnosed. The classic presentations of BD with hospitalizations, manic episodes, and psychotic symptoms lead clinicians to a correct diagnosis of BD I in psychiatric care. Time elapsed in psychiatric care follow-up, but none of the clinical features, seemed to explain a correct diagnosis of BD II, suggesting reliance on cross-sectional presentation of illness. Even though BD II was clearly less often correctly diagnosed than BD I, few other differences between the two types of BD were detected. 
BD I and II patients appeared to differ little in terms of clinical picture or comorbidity, and the prevalence of psychiatric comorbidity was strongly related to the current illness phase in both types. At the same time, the difference in outcome was clear. BD II patients spent about 40% more time depressed than BD I patients. Patterns of psychiatric comorbidity of BD and MDD differed somewhat qualitatively. Overall, MDD patients were likely to have more anxiety disorders and cluster A personality disorders, and bipolar patients to have more cluster B personality disorders. The adverse consequences of missing or delayed diagnosis are potentially serious. Thus, these findings strongly support the value of screening for BD in psychiatric settings, especially among the major depressive patients. Nevertheless, the diagnosis must be based on a clinical interview and follow-up of mood. Comorbidity, present in 59% of bipolar patients in a current phase, needs concomitant evaluation, follow-up, and treatment. To improve outcome in BD, treatment of bipolar depression is a major challenge for clinicians.

Relevance:

10.00%

Publisher:

Abstract:

The purpose of this study was to estimate the prevalence and distribution of reduced visual acuity, major chronic eye diseases, and subsequent need for eye care services in the Finnish adult population comprising persons aged 30 years and older. In addition, we analyzed the effect of decreased vision on functioning and need for assistance using the World Health Organization’s (WHO) International Classification of Functioning, Disability, and Health (ICF) as a framework. The study was based on the Health 2000 health examination survey, a nationally representative population-based comprehensive survey of health and functional capacity carried out in 2000–2001 in Finland. The study sample representing the Finnish population aged 30 years and older was drawn by two-stage stratified cluster sampling. The Health 2000 survey included a home interview and a comprehensive health examination conducted at a nearby screening center. If the invited participants did not attend, an abridged examination was conducted at home or in an institution. Based on our findings in participants, the great majority (96%) of Finnish adults had at least moderate visual acuity (VA ≥ 0.5) with current refraction correction, if any. However, in the age group 75–84 years the prevalence decreased to 81%, and after 85 years to 46%. In the population aged 30 years and older, the prevalence of habitual visual impairment (VA ≤ 0.25) was 1.6%, and 0.5% were blind (VA < 0.1). The prevalence of visual impairment increased significantly with age (p < 0.001), and after the age of 65 years the increase was sharp. Visual impairment was equally common for both sexes (OR 1.20, 95% CI 0.82–1.74). Based on self-reported and/or register-based data, the estimated total prevalences of cataract, glaucoma, age-related maculopathy (ARM), and diabetic retinopathy (DR) in the study population were 10%, 5%, 4%, and 1%, respectively. The prevalence of all of these chronic eye diseases increased with age (p < 0.001). 
Cataract and glaucoma were more common in women than in men (OR 1.55, 95% CI 1.26–1.91 and OR 1.57, 95% CI 1.24–1.98, respectively). The most prevalent eye diseases in people with visual impairment (VA ≤ 0.25) were ARM (37%), unoperated cataract (27%), glaucoma (22%), and DR (7%). More than half (58%) of visually impaired people had had a vision examination during the past five years, and 79% had received some vision rehabilitation services, mainly in the form of spectacles (70%). Only one-third (31%) had received formal low vision rehabilitation (i.e., fitting of low vision aids, receiving patient education, training for orientation and mobility, training for activities of daily living (ADL), or consultation with a social worker). People with low vision (VA 0.1–0.25) were less likely to have received formal low vision rehabilitation, magnifying glasses, or other low vision aids than blind people (VA < 0.1). Furthermore, low cognitive capacity and living in an institution were associated with limited use of vision rehabilitation services. Of the visually impaired living in the community, 71% reported a need for assistance and 24% had an unmet need for assistance in everyday activities. The prevalence of limitations in ADL, instrumental activities of daily living (IADL), and mobility increased with decreasing VA (p < 0.001). Visually impaired persons (VA ≤ 0.25) were four times more likely to have ADL disabilities than those with good VA (VA ≥ 0.8) after adjustment for sociodemographic and behavioral factors and chronic conditions (OR 4.36, 95% CI 2.44–7.78). Limitations in IADL and measured mobility were five times as likely (OR 4.82, 95% CI 2.38–9.76 and OR 5.37, 95% CI 2.44–7.78, respectively) and self-reported mobility limitations were three times as likely (OR 3.07, 95% CI 1.67–9.63) as in persons with good VA. 
The high prevalence of age-related eye diseases and subsequent visual impairment in the fastest growing segment of the population will result in a substantial increase in the demand for eye care services in the future. Many of the visually impaired, especially older persons with decreased cognitive capacity or living in an institution, have not had a recent vision examination and lack adequate low vision rehabilitation. This highlights the need for regular evaluation of visual function in the elderly and an active dissemination of information about rehabilitation services. Decreased VA is strongly associated with functional limitations, and even a slight decrease in VA was found to be associated with limited functioning. Thus, continuous efforts are needed to identify and treat eye diseases to maintain patients’ quality of life and to alleviate the social and economic burden of serious eye diseases.
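The odds ratios quoted above come from models adjusted for sociodemographic and behavioral factors and chronic conditions, so they are regression estimates. As a reminder of the underlying arithmetic, an unadjusted odds ratio and its Wald-type 95% confidence interval can be computed from a 2×2 table; the counts below are hypothetical and are not the study's data.

```python
import math

# Hypothetical 2x2 table (NOT the study's data):
#                     ADL disability   no disability
# VA <= 0.25 (a, b):       30               20
# VA >= 0.8  (c, d):       50              200
a, b, c, d = 30, 20, 50, 200

odds_ratio = (a * d) / (b * c)                 # cross-product ratio
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of ln(OR)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR {odds_ratio:.2f}, 95% CI {ci_low:.2f}-{ci_high:.2f}")
```

With covariate adjustment, the estimates come from logistic regression instead, but the interpretation of the OR and its CI is the same.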

Relevance:

10.00%

Publisher:

Abstract:

Atrial fibrillation is the most common arrhythmia requiring treatment. This thesis investigated atrial fibrillation (AF) with a specific emphasis on atrial remodeling, which was analysed from epidemiological, clinical, and magnetocardiographic (MCG) perspectives. In the first study we evaluated, in real-life clinical practice, a population-based cohort of AF patients referred for their first elective cardioversion (CV). 183 consecutive patients were included, and in 153 (84%) of them sinus rhythm (SR) was restored. Only 39 (25%) of those maintained SR for one year. Shorter duration of AF and the use of sotalol were the only characteristics associated with better restoration and maintenance of SR. During the one-year follow-up, 40% of the patients ended up in permanent AF. Female gender and older age were associated with the acceptance of permanent AF. The LIFE trial was a prospective, randomised, double-blinded study that evaluated losartan and atenolol in patients with hypertension and left ventricular hypertrophy (LVH). Of the 8,851 patients with SR at baseline and without a history of AF, 371 developed new-onset AF during the study. Patients with new-onset AF had an increased risk of cardiac events, stroke, and an increased rate of hospitalisation for heart failure. Younger age, female gender, lower systolic blood pressure, lesser LVH in the ECG, and randomisation to losartan therapy were independently associated with a lower frequency of new-onset AF. The impact of AF on morbidity and mortality was evaluated in a post-hoc analysis of the OPTIMAAL trial, which compared losartan with captopril in patients with acute myocardial infarction (AMI) and evidence of LV dysfunction. Of the 5,477 randomised patients, 655 had AF at baseline, and 345 developed new AF during the follow-up period (median 3.0 years). Older patients and patients with signs of more serious heart disease more often had AF at baseline or developed it during follow-up. 
Patients with AF at baseline had an increased risk of mortality (hazard ratio (HR) 1.32) and stroke (HR 1.77). New-onset AF was associated with increased mortality (HR 1.82) and stroke (HR 2.29). In the fourth study we assessed the reproducibility of our MCG method. This method was used in the fifth study, in which 26 patients with persistent AF had, immediately after the CV, a longer P-wave duration and a higher energy of the last portion of the atrial signal (RMS40) in MCG, increased P-wave dispersion in SAECG, and decreased pump function of the atria as well as an enlarged atrial diameter in echocardiography compared with age- and disease-matched controls. After one month in SR, the P-wave duration in MCG still remained longer and the left atrial (LA) diameter greater compared with the controls, while the other measurements had returned to the same level as in the control group. In conclusion, AF is not a rare condition in either the general population or in patients with hypertension or AMI, and it is associated with an increased risk of morbidity and mortality. Therefore, atrial remodeling, which increases the likelihood of AF and also seems to be relatively stable, has to be identified and prevented. MCG was found to be an encouraging new method to study electrical atrial remodeling and reverse remodeling. RAAS-suppressing medications appear to be the most promising means of preventing atrial remodeling and AF.

Relevance:

10.00%

Publisher:

Abstract:

Intensive care is to be provided to patients benefiting from it, in an ethical, efficient, effective, and cost-effective manner. This implies a long-term qualitative and quantitative analysis of intensive care procedures and related resources. The study population consists of 2709 patients treated in the general intensive care unit (ICU) of Helsinki University Hospital. The study sectors investigate intensive care patients’ mortality, quality of life (QOL), Quality-Adjusted Life-Years (QALY units), and factors related to severity of illness, length of stay (LOS), patient’s age, and evaluation period, as well as experiences and memories connected with the ICU episode. In addition, the study examines the qualities of two QOL measures, the RAND 36-Item Health Survey 1.0 (RAND-36) and the five-item EuroQol (EQ-5D), and assesses the correlation of the test results. Patients treated in 1995 responded to the RAND-36 questionnaire in 1996. All patients treated from 1995 to 2000 received a QOL questionnaire in 2001, when 1–7 years had elapsed from the intensive treatment. The response rate was 79.5 %. Main results: 1) Of the patients who died within the first year (n = 1047), 66 % died during the intensive care period or within the following month. The non-survivors were older than the surviving patients, generally had higher-than-average APACHE II and SOFA scores depicting the severity of illness, and their ICU LOS was longer and hospital stay shorter than those of the surviving patients (p < 0.001). The mortality of patients receiving conservative treatment was higher than that of those receiving surgical treatment. Patients replying to the QOL survey in 2001 (n = 1099) had recovered well: 97 % of them lived at home. More than half considered their QOL good or extremely good, 40 % satisfactory, and 7 % bad. All QOL indexes of those of working age were considerably lower (p < 0.001) than the comparable figures for the age- and gender-adjusted Finnish population. 
The 5-year monitoring period made evident that mental recovery was slower than physical recovery. 2) The results of the RAND-36 and the EQ-5D correlated well (p < 0.01). The RAND-36 profile measure distinguished more clearly between the different categories of QOL and their levels. The EQ-5D measured the patient group’s general QOL well, and its sum index was used to calculate QALY units. 3) QALY units were calculated by multiplying the time the patient survived after the ICU stay, or the expected life-years, by the EQ-5D sum index. Aging automatically lowers the number of QALY units. Patients under the age of 65 receiving conservative treatment benefited from treatment to a greater extent, measured in QALY units, than their peers receiving surgical treatment, but in the age group 65 and over, patients with surgical treatment received higher QALY ratings than recipients of conservative treatment. 4) The intensive care experience and QOL ratings were connected. The QOL indices were highest for those with memories of intensive care as a positive experience, although their illness requiring intensive care was less serious than average. No statistically significant differences were found in the QOL indices of those with negative memories, no memories, or those who did not express the quality of their experiences.
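The QALY calculation described in the abstract is a simple product of survival time (or expected life-years) and the EQ-5D sum index. A minimal sketch; the function name and example numbers are illustrative and not taken from the study:

```python
def qaly_units(life_years: float, eq5d_index: float) -> float:
    """Quality-Adjusted Life-Years: survived (or expected) life-years
    weighted by the EQ-5D sum index (1.0 = full health, 0.0 = death)."""
    return life_years * eq5d_index

# 10 life-years at an EQ-5D sum index of 0.75 yield 7.5 QALYs
print(qaly_units(10, 0.75))  # 7.5
```

Because the weight is at most 1.0, fewer remaining life-years mechanically cap the attainable QALYs, which is why aging lowers the QALY count even at an unchanged QOL level.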