27 results for Quarter horse
in Helda - Digital Repository of University of Helsinki
Abstract:
The thesis concentrates on two questions: the translation of metaphors in literary texts, and the use of semiotic models and tools in translation studies. The aim of the thesis is to present a semiotic, text-based model designed to ease the translation of metaphors and to analyze translated metaphors. In the translation of metaphors I will concentrate on the central problem of metaphor translation: in addition to its denotation and connotation, a single metaphor may contain numerous culture- or genre-specific meanings. How can a translator ensure the translation of all meanings relevant to the text as a whole? I will approach the question from two directions. Umberto Eco's holistic text analysis model provides an opportunity to concentrate on the problematic nature of metaphor translation at the level of the text as a specific entity, while George Lakoff's and Mark Johnson's metaphor research makes it possible to approach the question at the level of individual metaphors. On the semiotic side, the model utilizes Eero Tarasti's existential semiotics, supported by Algirdas Greimas' actant model and Yuri Lotman's theory of cultural semiotics. In the model introduced in the thesis, individual texts are deconstructed through Eco's model into elements. The textual roles and features of these elements are distilled further, through Tarasti's model, into their coexistent meaning levels. The prioritization and analysis of these meaning levels provide an opportunity to consider the contents and significance of specific metaphors in relation to the needs of the text as a whole. As example texts, I will use Motörhead's hard rock classic Iron Horse/Born to Lose and its translation Rauta-airot by Viikate. I will use the introduced model to analyze the metaphors in the source and target texts, and to consider the transfer of culture-specific elements across languages and cultural borders.
In addition, I will use the analysis process to examine the validity of the model introduced in the thesis.
Abstract:
Monocarboxylate transporters (MCTs) transport lactate and protons across cell membranes. During intense exercise, lactate and protons accumulate in the exercising muscle and are transported to the plasma. In the horse, MCTs are responsible for the majority of lactate and proton removal from exercising muscle, and are therefore also the main mechanism to hinder the decline in pH in muscle cells. Two isoforms, MCT1 and MCT4, which need an ancillary protein CD147, are expressed in equine muscle. In the horse, as in other species, MCT1 is predominantly expressed in oxidative fibres, where its likely role is to transport lactate into the fibre to be used as a fuel at rest and during light work, and to remove lactate during intensive exercise when anaerobic energy production is needed. The expression of CD147 follows the fibre type distribution of MCT1. These proteins were detected in both the cytoplasm and sarcolemma of muscle cells in the horse breeds studied: Standardbred and Coldblood trotters. In humans, training increases the expression of both MCT1 and MCT4. In this study, the proportion of oxidative fibres in the muscle of Norwegian-Swedish Coldblood trotters increased with training. Simultaneously, the expression of MCT1 and CD147, measured immunohistochemically, seemed to increase more in the cytoplasm of oxidative fibres than in the fast fibre type IIB. Horse MCT4 antibody failed to work in immunohistochemistry. In the future, a quantitative method should be introduced to examine the effect of training on muscle MCT expression in the horse. Lactate can be taken up from plasma by red blood cells (RBCs). In horses, two isoforms, MCT1 and MCT2, and the ancillary protein CD147 are expressed in RBC membranes. The horse is the only species studied in which RBCs have been found to express MCT2, and the physiological role of this protein in RBCs is unknown. The majority of horses express all three proteins, but 10-20% of horses express little or no MCT1 or CD147. 
This leads to large interindividual variation in the capacity to transport lactate into RBCs. Here, the expression level of MCT1 and CD147 was bimodally distributed in three studied horse breeds: Finnhorse, Standardbred and Thoroughbred. The level of MCT2 expression was distributed unimodally. The expression level of lactate transporters could not be linked to performance markers in Thoroughbred racehorses. In the future, better performance indexes should be developed to better enable the assessment of whether the level of MCT expression affects athletic performance. In human subjects, several mutations in MCT1 have been shown to cause decreased lactate transport activity in muscle and signs of myopathy. In the horse, two amino acid sequence variations, one of which was novel, were detected in MCT1 (V432I and K457Q). The mutations found in horses were in different areas compared to mutations found in humans. One mutation (M125V) was detected in CD147. The mutations found could not be linked with exercise-induced myopathy. MCT4 cDNA was sequenced for the first time in the horse, but no mutations could be detected in this protein.
Abstract:
Earlier studies have shown that the speed of information transmission developed radically during the 19th century. The fast development was mainly due to the change from sailing ships and horse-driven coaches to steamers and railways, as well as the telegraph. Speed of information transmission has normally been measured by calculating the duration between writing and receiving a letter, or between an important event and the time when the news was published elsewhere. As overseas mail was generally carried by ships, the history of communications and maritime history are closely related. This study also brings a postal historical aspect to the academic discussion. Additionally, there is another new aspect included. In business enterprises, information flows generally consisted of multiple transactions. Although fast one-way information was often crucial, e.g. news of a changing market situation, at least equally important was that there was a possibility to react rapidly. To examine the development of business information transmission, the duration of mail transport has been measured by a systematic and commensurable method, using consecutive information circles per year as the principal tool for measurement. The study covers a period of six decades, several of the world's most important trade routes and different mail-carrying systems operated by merchant ships, sailing packets and several nations' steamship services. The main sources have been the sailing data of mail-carrying ships and correspondence of several merchant houses in England. As the world's main trade routes had their specific historical backgrounds with different businesses, interests and needs, the systems for information transmission did not develop similarly or simultaneously. It was a process lasting several decades, initiated by the idea of organizing sailings in a regular line system. 
The evolution proceeded generally as follows: originally there was a more or less irregular system, then a regular system and finally a more frequent regular system of mail services. The trend was from sail to steam, but both these means of communication improved following the same scheme. Faster sailings alone did not radically improve the number of consecutive information circles per year, if the communication was not frequent enough. Neither did improved frequency advance the information circulation if the trip was very long or if the sailings were overlapping instead of complementing each other. The speed of information transmission could be improved by speeding up the voyage itself (technological improvements, minimizing the waiting time at ports of call, etc.) but especially by organizing sailings so that the recipients had the possibility to reply to arriving mails without unnecessary delay. It took two to three decades before the mail-carrying shipping companies were able to organize their sailings in an optimal way. Strategic shortcuts over isthmuses (e.g. Panama, Suez) together with the cooperation between steamships and railways enabled the most effective improvements in global communications before the introduction of the telegraph.
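The principal measurement tool described above, consecutive information circles per year, can be sketched in a few lines: one "circle" is a letter out plus the reply back, including the wait before a return mail could depart. The voyage and waiting times below are invented for illustration, not figures from the study:

```python
def circles_per_year(outbound_days, inbound_days, wait_days=0.0):
    """Consecutive information circles per year: how many full
    letter-and-reply exchanges fit into one year, given one-way
    transit times and the waiting time before a reply could depart."""
    circle = outbound_days + wait_days + inbound_days
    return 365.0 / circle

# E.g. a 45-day outbound voyage, a 14-day wait for the next departing
# mail ship, and a 50-day return voyage.
print(round(circles_per_year(45, 50, 14), 1))  # → 3.3
```

The definition makes the text's point concrete: halving the voyage time helps little if the wait for the next departure dominates the circle, which is why sailing frequency and schedule coordination mattered as much as speed.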
Abstract:
The thesis addresses the problem of Finnish Iron Age bells, pellet bells and bell pendants, previously unexplored musical artefacts from 400–1300 AD. The study, which contributes to the field of music archaeology, aims to provide a gateway to ancient soundworlds and ideas of music making. The research questions include: Where did these metal artefacts come from? How did they sound? How were they used? What did their sound mean to the people of the Iron Age? The data collected at the National Museum of Finland and at several provincial museums covers a total of 486 bells, pellet bells and bell pendants. By means of a cluster analysis, each category was divided into several subgroups. The subgroups, which all seem to have a different dating and geographical distribution, represent a spread of both local and international manufacturing traditions. According to an elemental analysis, the material varies from iron to copper-tin, copper-lead and copper-tin-lead alloys. Clappers, pellets and pebbles prove that the bells and pellet bells were indisputably instruments intended for sound production. Clusters of small bell pendants, however, probably produced sound by jingling against each other. Spectrogram plots reveal that the partials of the still audible sounds range from 1 000 to 19 850 Hz. On the basis of 129 inhumation graves, hoards, barrows and stray finds, it seems evident that the bells, pellet bells and bell pendants were fastened to dresses and horse harnesses or carried in pouches and boxes. The resulting acoustic spaces could have been employed in constructing social hierarchies, since the instruments usually appear in richly furnished graves. Furthermore, the instruments repeatedly occur with crosses, edge tools and zoomorphic pendants that in the later Finnish-Karelian culture were regarded as prophylactic amulets. In the Iron Age as well as in later folk culture, the bell sounds seem to have expressed territorial, social and cosmological boundaries.
Abstract:
The aim of this study is to examine the relationship of the Roman villa to its environment. The villa was an important feature of the countryside intended both for agricultural production and for leisure. Manuals of Roman agriculture give instructions on how to select a location for an estate. The ideal location was a moderate slope facing east or south in a healthy area and good neighborhood, near good water resources and fertile soils. A road or a navigable river or the sea was needed for transportation of produce. A market for selling the produce, a town or a village, should have been nearby. The research area is the surroundings of the city of Rome, a key area for the development of the villa. The materials used consist of archaeological settlement sites, literary and epigraphical evidence as well as environmental data. The sites include all settlement sites from the 7th century BC to 5th century AD to examine changes in the tradition of site selection. Geographical Information Systems were used to analyze the data. Six aspects of location were examined: geology, soils, water resources, terrain, visibility/viewability and relationship to roads and habitation centers. Geology was important for finding building materials and the large villas from the 2nd century BC onwards are close to sources of building stones. Fertile soils were sought even in the period of the densest settlement. The area is rich in water, both rainfall and groundwater, and finding a water supply was fairly easy. A certain kind of terrain was sought over very long periods: a small spur or ridge shoulder facing preferably south with an open area in front of the site. The most popular villa resorts are located on the slopes visible from almost the entire Roman region. A visible villa served the social and political aspirations of the owner, whereas being in the villa created a sense of privacy. The area has a very dense road network ensuring good connectivity from almost anywhere in the region. 
The best visibility/viewability, dense settlement and most burials by roads coincide, creating a good neighborhood. The locations featuring the most qualities cover nearly a quarter of the area and more than half of the settlement sites are located in them. The ideal location was based on centuries of practical experience and rationalized by the literary tradition.
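The overlay logic behind the six locational aspects can be sketched as a simple raster scoring exercise: each criterion becomes a boolean layer and cells are ranked by how many favourable qualities they combine. The layers below are random placeholders, not the study's GIS data:

```python
import numpy as np

rng = np.random.default_rng(0)
shape = (4, 4)  # a toy 4x4 study area
criteria = {
    "near_building_stone": rng.random(shape) > 0.5,
    "fertile_soil":        rng.random(shape) > 0.5,
    "water_available":     rng.random(shape) > 0.5,
    "suitable_terrain":    rng.random(shape) > 0.5,
    "good_visibility":     rng.random(shape) > 0.5,
    "near_road":           rng.random(shape) > 0.5,
}
# score each cell by the number of qualities it satisfies
score = sum(layer.astype(int) for layer in criteria.values())
best = score >= 5  # cells combining most of the qualities
print(best.sum(), "of", score.size, "cells combine five or more qualities")
```

A real analysis would weight the layers and use actual terrain, soil, and road data, but the count-and-threshold step mirrors the "locations featuring the most qualities" result above.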
Abstract:
The study deals with the dating and function of the fortress of Agios Donatos, located in the Kokytos river valley in Thesprotia, northwestern Greece. To solve the dating problem, parallels from the research literature had to be used, preferably from as close a range as possible. As most of the fortresses in close proximity had not been adequately published, parallels from a larger area, throughout the Hellenistic world, had to be used. Archaeological material found in trial trenches on site was used when possible. To assess the function of the site, the site itself and its relation to the environment had to be studied and compared with the parallels found in the research literature, mostly different archaeological survey projects in Greece and Asia Minor. The fortress was built during a period ranging from the final decades of the fourth century down to the mid-third century BC. Most likely it was built during the first quarter of the third century, that is, the reign of Pyrrhus of Epirus, when the area experienced a boom in fortress building. The function of the site was most likely to protect a trade route from the major port in the area to the central areas of the valley and on to the Kalamas river, which was the major route on to Macedon. Finally, an appendix dealing with the other fortified sites in Thesprotia has been compiled.
Abstract:
In my master's thesis I analyse Byzantine warfare in the late period of the empire, using the military operations between the Byzantines and the crusader Principality of Achaia (1259–83) as a case study. Byzantine strategy was based, in an “oriental manner”, on ambushes, diplomacy, surprise attacks, deception and the like. Open field battles that were risky in comparison with their benefits were usually avoided, but the Byzantines were sometimes forced to seek an open encounter because of their limited ability to keep strong armies in the field for long periods of time. Foreign mercenaries had an important place in Byzantine armies, and they could simply change sides if their paymasters ran out of resources. The use of mercenaries on short contracts made the composition of an army flexible but also heterogeneous; as a result, Byzantine armies were sometimes ineffective and prone to confusion. In open field battles the Byzantines used a formation made up of several lines placed one after another. This formation was especially suitable for cavalry battles, though the Byzantines may also have used other kinds of formations. The Byzantines were not considered equal to the Latins in close combat. Western Europeans saw mainly horse archers and Latin mercenaries in Byzantine service as threats to themselves in battle. The legitimacy of rulers around the Aegean Sea was weak, and in many cases political intrigues and personal relationships may have resolved the battles. Especially in sieges, the loyalty of the population was decisive. In sieges the Byzantines used plenty of siege machines and archers, which made fast conquests possible but was expensive. The Byzantines protected their frontiers by building castles. Military operations against the Principality of Achaia were mostly small-scale raids following an intensive beginning. Byzantine raids were mostly made by privateers and mountaineers.
This does not fit the traditional picture in which warfare belonged to the imperial professional army. It is unlikely that the military operations in the war against the Principality of Achaia caused a great demographic or economic catastrophe, and some regions in the war zone may even have flourished. On the other hand, people started to concentrate into villages, which, together with growing risks for trade, probably disturbed economic development, and as a result birth rates may have decreased. Both sides of the war sought to exchange their prisoners of war, who were treated according to conventional manners accepted by both sides. It was possible to sell prisoners, especially women and children, into slavery, but the scale of this trade does not seem to have been great in the military operations treated in this thesis.
Abstract:
Alzheimer's disease (AD) is characterized by an impairment of the semantic memory responsible for processing meaning-related knowledge. This study was aimed at examining how Finnish-speaking healthy elderly subjects (n = 30) and mildly (n = 20) and moderately (n = 20) demented AD patients utilize semantic knowledge to perform a semantic fluency task, a method of studying semantic memory. In this task subjects are typically given 60 seconds to generate words belonging to the semantic category of animals. Successful task performance requires fast retrieval of subcategory exemplars in clusters (e.g., farm animals: 'cow', 'horse', 'sheep') and switching between subcategories (e.g., pets, water animals, birds, rodents). In this study, the scope of the task was extended to cover various noun and verb categories. The results indicated that, compared with normal controls, both mildly and moderately demented AD patients showed reduced word production, limited clustering and switching, narrowed semantic space, and an increase in errors, particularly perseverations. However, the size of the clusters, the proportion of clustered words, and the frequency and prototypicality of words remained relatively similar across the subject groups. Although the moderately demented patients showed a poorer overall performance than the mildly demented patients in the individual categories, the error analysis appeared unaffected by the severity of AD. The results indicate a semantically rather coherent performance but less specific, effective, and flexible functioning of the semantic memory in mild and moderate AD patients. The findings are discussed in relation to recent theories of word production and semantic representation. Keywords: semantic fluency, clustering, switching, semantic category, nouns, verbs, Alzheimer's disease
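The clustering and switching measures described above can be sketched as follows. A cluster is taken here as a maximal run of words from the same subcategory, and a switch as any transition between subcategories; the word list and labels are invented for illustration:

```python
def score_fluency(words, category_of):
    """Count subcategory switches and cluster sizes in a fluency response."""
    cats = [category_of[w] for w in words]
    # a switch is any transition between different subcategories
    switches = sum(1 for a, b in zip(cats, cats[1:]) if a != b)
    # find maximal runs of same-category words
    runs, current = [], 1
    for a, b in zip(cats, cats[1:]):
        if a == b:
            current += 1
        else:
            runs.append(current)
            current = 1
    runs.append(current)
    clusters = [r for r in runs if r >= 2]  # runs of at least two words
    return {"switches": switches, "clusters": clusters}

labels = {"cow": "farm", "horse": "farm", "sheep": "farm",
          "cat": "pet", "dog": "pet", "seal": "water"}
print(score_fluency(["cow", "horse", "sheep", "cat", "dog", "seal"], labels))
# → {'switches': 2, 'clusters': [3, 2]}
```

Published scoring schemes differ in details (e.g. whether cluster size counts words or transitions), so this is one plausible operationalization, not necessarily the one used in the study.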
Abstract:
This research discusses the decoupling of CAP (Common Agricultural Policy) support and the impacts this may have on grain cultivation area and the supply of beef and pork in Finland. The study presents the definitions of and studies on decoupled agricultural subsidies, the development of the supply of grain, beef and pork in Finland, and changes in the leading factors affecting supply between 1970 and 2005. Decoupling agricultural subsidies means that the linkage between subsidies and production levels is disconnected; subsidies do not affect the amount produced. The hypothesis is that decoupling will substantially decrease the amounts produced in agriculture. In supply research, econometric models representing the supply of agricultural products are estimated from data on prices and quantities produced. With the estimated supply models, the impacts of changes in prices and public policies on the supply of agricultural products can be forecast. In this study, three regression models are estimated, describing the combined cultivation area of rye, wheat, oats and barley, and the supply of beef and pork. Grain cultivation area and the supply of beef are estimated from data for 1970 to 2005, and the supply of pork from data for 1995 to 2005. The dependencies in the models are postulated to be linear. The explanatory variables in the grain model were the average return per hectare, agricultural subsidies, the grain cultivation area in the previous year and the cost of fertilization. The explanatory variables in the beef model were the total return from markets and subsidies and the amount of beef production in the previous year. In the pork model the explanatory variables were the total return, the price of piglets, investment subsidies, a trend of increasing productivity and a dummy variable for the last quarter of the year. The R-squared was 0.81 for the model of grain cultivation area, 0.77 for the model of beef supply and 0.82 for the model of pork supply.
The development of grain cultivation area and of the supply of beef and pork was estimated for 2006–2013 with these regression models. In the basic scenario, the development of the explanatory variables in 2006–2013 was postulated to match their averages in 1995–2005. After the basic scenario, the impacts of decoupling CAP subsidies and domestic subsidies on cultivation area and supply were simulated. According to the results of the CAP decoupling scenario, grain cultivation area decreases from 1.12 million hectares in 2005 to 1.0 million hectares in 2013, and the supply of beef from 88.8 million kilos in 2005 to 67.7 million kilos in 2013. Decoupling domestic and investment subsidies will decrease the supply of pork from 194 million kilos in 2005 to 187 million kilos in 2006. By 2013 the supply of pork grows to 203 million kilos.
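The estimation step behind such supply models can be illustrated with a minimal linear least-squares sketch. The data below are synthetic and the variable names and coefficients hypothetical, standing in for series like return per hectare and subsidies, not the thesis's 1970–2005 data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 36  # e.g. 36 annual observations
X = np.column_stack([
    np.ones(n),               # intercept
    rng.normal(100, 10, n),   # stand-in for average return per hectare
    rng.normal(50, 5, n),     # stand-in for subsidies
])
true_b = np.array([200.0, 3.0, 1.5])          # hypothetical coefficients
y = X @ true_b + rng.normal(0, 5, n)          # supply with noise

b, *_ = np.linalg.lstsq(X, y, rcond=None)     # OLS fit
resid = y - X @ b
r2 = 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
print(b.round(1), round(r2, 2))
```

With a fitted model, scenario forecasts like those above amount to plugging postulated future values of the explanatory variables into `X @ b`.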
Abstract:
The Vantaa Primary Care Depression Study (PC-VDS) is a naturalistic and prospective cohort study concerning primary care patients with depressive disorders. It forms a collaborative research project between the Department of Mental and Alcohol Research of the National Public Health Institute, and the Primary Health Care Organization of the City of Vantaa. The aim is to obtain a comprehensive view on clinically significant depression in primary care, and to compare depressive patients in primary care and in secondary level psychiatric care in terms of clinical characteristics. Consecutive patients (N=1111) in three primary care health centres were screened for depression with the PRIME-MD, and positive cases interviewed by telephone. Cases with current depressive symptoms were diagnosed face-to-face with the Structured Clinical Interview for DSM-IV Axis I Disorders (SCID-I/P). A cohort of 137 patients with unipolar depressive disorders, comprising all patients with at least two depressive symptoms and clinically significant distress or disability, was recruited. The Structured Clinical Interview for DSM-IV Axis II Disorders (SCID-II), medical records, rating scales, interview and a retrospective life-chart were used to obtain comprehensive cross-sectional and retrospective longitudinal information. For investigation of suicidal behaviour the Scale for Suicidal Ideation (SSI), patient records and the interview were used. The methodology was designed to be comparable to The Vantaa Depression Study (VDS) conducted in secondary level psychiatric care. Comparison of major depressive disorder (MDD) patients aged 20-59 from primary care in PC-VDS (N=79) was conducted with new psychiatric outpatients (N =223) and inpatients (N =46) in VDS. The PC-VDS cohort was prospectively followed up at 3, 6 and 18 months. Altogether 123 patients (90%) completed the follow-up. Duration of the index episode and the timing of relapses or recurrences were examined using a life-chart. 
The retrospective investigation revealed current MDD in most (66%), and lifetime MDD in nearly all (90%) cases of clinically significant depressive syndromes. Two thirds of the “subsyndromal” cases had a history of major depressive episode (MDE), although they were currently either in partial remission or a potential prodromal phase. Recurrences and chronicity were common. The picture of depression was complicated by Axis I co-morbidity in 59%, Axis II in 52% and chronic Axis III disorders in 47%; only 12% had no co-morbidity. Within their lifetimes, one third (37%) had seriously considered suicide, and one sixth (17%) had attempted it. Suicidal behaviour clustered in patients with moderate to severe MDD, co-morbidity with personality disorders, and a history of treatment in psychiatric care. The majority had received treatment for depression, but suicidal ideation had mostly remained unrecognised. The comparison of patients with MDD in primary care to those in psychiatric care revealed that the majority of suicidal or psychotic patients were receiving psychiatric treatment, and the patients with the most severe symptoms and functional limitations were hospitalized. In other clinical aspects, patients with MDD in primary care were surprisingly similar to psychiatric outpatients. Mental health contacts earlier in the current MDE were common among primary care patients. The 18-month prospective investigation with a life-chart methodology verified the chronic and recurrent nature of depression in primary care. Only one-quarter of patients with MDD achieved and maintained full remission during the follow-up, while another quarter failed to remit at all. The remaining patients suffered either from residual symptoms or recurrences. While severity of depression was the strongest predictor of recovery, presence of co-morbid substance use disorders, chronic medical illness and cluster C personality disorders all contributed to an adverse outcome. 
In clinical decision making, besides the severity of depression and co-morbidity, a history of previous MDD should not be ignored by primary care doctors: depression in primary care is usually severe enough to indicate at least follow-up and, for those with residual symptoms, an evaluation of their current treatment. Moreover, recognition of suicidal behaviour among depressed patients should be improved. In order to improve the outcome of depression in primary care, its often chronic and recurrent nature should be taken into account in organizing care. According to the literature, chronic disease management programmes, with an enhanced role for case managers and greater integration of primary and specialist care, have been successful. Optimal ways of allocating resources between treatment providers as well as within health centres should be found.
Abstract:
Quality of life (QoL) and health-related quality of life (HRQoL) are becoming key outcomes of health care, due to increased respect for the subjective valuations and well-being of patients and to an increasing part of the ageing population living with chronic, non-fatal conditions. Preference-based HRQoL measures enable the estimation of health utility, which can be useful for rational rationing, evidence-based medicine and health policy. This study aimed to compare the individual severity and public health burden of major chronic conditions in Finland, including and focusing on reliably diagnosed psychiatric conditions. The study is based on the Health 2000 survey, a representative general population survey of 8028 Finns aged 30 and over. Depressive, anxiety and alcohol use disorders were diagnosed with the Composite International Diagnostic Interview (M-CIDI). HRQoL was measured with the 15D and the EQ-5D, with an 83% response rate. The study found that people with psychiatric disorders had the lowest 15D HRQoL scores at all ages, in comparison to other main groups of chronic conditions. Considering 29 individual conditions, three of the four most severe (on the 15D) were psychiatric disorders; the most severe was Parkinson's disease. Of the psychiatric disorders, chronic conditions that have sometimes been considered relatively mild (dysthymia, agoraphobia, generalized anxiety disorder and social phobia) were found to be the most severe. This was explained both by the severity of the impact of these disorders on the mental health domains of HRQoL and by the fact that the decreases were widespread across most dimensions of HRQoL. Considering the public health burden of conditions, musculoskeletal disorders were associated with the largest burden, followed by psychiatric disorders. Psychiatric disorders were associated with the largest burden at younger ages.
Of individual conditions, the largest burden found was for depressive disorders, followed by urinary incontinence and arthrosis of the hip and knee. The public health burden increased greatly with age, so the ageing of the Finnish population means that the disease burden caused by chronic conditions will increase by a quarter up to the year 2040 if morbidity patterns do not change. Investigating alcohol consumption and HRQoL revealed that although abstainers had poorer HRQoL than moderate drinkers, this was mainly due to many abstainers being former drinkers with the poorest HRQoL. Moderate drinkers did not have significantly better HRQoL than abstainers who were not former drinkers. Psychiatric disorders are associated with a large part of the non-fatal disease burden in Finland. In particular, anxiety disorders appear to be more severe and to carry a larger public health burden than previously thought.
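The notion of population burden used above, roughly prevalence times HRQoL decrement, can be sketched in a few lines. The prevalence and utility figures below are invented purely for illustration (they are not the Health 2000 estimates), chosen only so the ordering matches the reported ranking:

```python
conditions = {
    # name: (prevalence in population, mean HRQoL utility of patients)
    "musculoskeletal":     (0.30, 0.82),  # hypothetical figures
    "depressive":          (0.06, 0.78),
    "urinary_incontinence": (0.08, 0.84),
}
population_norm = 0.91  # hypothetical mean utility of the general population

# burden per condition = prevalence x mean utility decrement
burden = {name: round(p * (population_norm - u), 4)
          for name, (p, u) in conditions.items()}
print(max(burden, key=burden.get))  # → musculoskeletal
```

The sketch shows why a common, moderately severe condition can outweigh a rarer, individually more severe one at population level.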
Abstract:
The first quarter of the 20th century witnessed a rebirth of cosmology, the study of our Universe, as a field of scientific research with testable theoretical predictions. The amount of available cosmological data grew slowly from a few galaxy redshift measurements, rotation curves and local light element abundances into the first detection of the cosmic microwave background (CMB) in 1965. By the turn of the century the amount of data had exploded, incorporating new, exciting cosmological observables such as lensing, Lyman alpha forests, type Ia supernovae, baryon acoustic oscillations and Sunyaev-Zeldovich regions, to name a few. -- The CMB, the ubiquitous afterglow of the Big Bang, carries with it a wealth of cosmological information. Unfortunately, that information, delicate intensity variations, turned out to be hard to extract from the overall temperature. It took nearly 30 years after the first detection before the first evidence of fluctuations in the microwave background was presented. At present, high precision cosmology is solidly based on precise measurements of the CMB anisotropy, making it possible to pinpoint cosmological parameters to one-in-a-hundred level precision. The progress has made it possible to build and test models of the Universe that differ in the way the cosmos evolved during some fraction of the first second after the Big Bang. -- This thesis is concerned with high precision CMB observations. It presents three selected topics along a CMB experiment analysis pipeline. Map-making and residual noise estimation are studied using an approach called destriping. The studied approximate methods are invaluable for the large datasets of any modern CMB experiment and will undoubtedly become even more so when the next generation of experiments reaches the operational stage. -- We begin with a brief overview of cosmological observations and describe the general relativistic perturbation theory.
Next we discuss the map-making problem of a CMB experiment and the characterization of the residual noise present in the maps. Finally, the use of modern cosmological data is presented in the study of an extended cosmological model, the correlated isocurvature fluctuations. Currently available data are shown to indicate that future experiments are certainly needed to provide more information on these extra degrees of freedom. Any solid evidence of the isocurvature modes would have a considerable impact due to their power in model selection.
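The idea of destriping can be illustrated with a toy model: time-ordered data equal to a sky signal plus an unknown constant offset per scan, with the map and the offsets re-estimated alternately. This is a drastic simplification of a real CMB pipeline (real destripers handle many short baselines, noise filters and huge datasets), with all sizes and seeds invented:

```python
import numpy as np

rng = np.random.default_rng(2)
npix, nscan, scanlen = 8, 6, 64          # tiny toy sky and scan pattern
sky = rng.normal(0, 1, npix)             # true sky signal per pixel
pointing = rng.integers(0, npix, (nscan, scanlen))
offsets = rng.normal(0, 5, nscan)        # unknown 1/f-like scan offsets
tod = sky[pointing] + offsets[:, None] + rng.normal(0, 0.1, (nscan, scanlen))

est_off = np.zeros(nscan)
for _ in range(50):
    # bin the offset-corrected data into a map ...
    clean = tod - est_off[:, None]
    hits = np.bincount(pointing.ravel(), minlength=npix)
    m = np.bincount(pointing.ravel(), weights=clean.ravel(),
                    minlength=npix) / hits
    # ... then re-estimate each scan's offset against that map
    est_off = (tod - m[pointing]).mean(axis=1)
est_off -= est_off.mean()  # the overall constant offset is unobservable
print(np.abs(est_off - (offsets - offsets.mean())).max())
```

Because different scans cross the same sky pixels, the crossings pin down the relative offsets, which is the essential mechanism destriping exploits.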
Abstract:
The aim of this study was to estimate the development of fertility in North-Central Namibia, former Ovamboland, from 1960 to 2001. Special attention was given to the onset of fertility decline and to the impact of the HIV epidemic on fertility. An additional aim was to introduce parish registers as a source of data for fertility research in Africa. Data used consisted of parish registers from Evangelical Lutheran congregations, the 1991 and 2001 Population and Housing Censuses, the 1992 and 2000 Namibia Demographic and Health Surveys, and the HIV sentinel surveillances of 1992-2004. Both period and cohort fertility were analysed. The P/F ratio method was used when analysing census data. The impact of HIV infection on fertility was estimated indirectly by comparing the fertility histories of women who died at an age of less than 50 years with the fertility of other women. The impact of the HIV epidemic on fertility was assessed both among infected women and in the general population. Fertility in the study population began to decline in 1980. The decline was rapid during the 1980s, levelled off in the early 1990s at the end of the war of independence, and then continued until the end of the study period. According to parish registers, total fertility was 6.4 in the 1960s and 6.5 in the 1970s, and declined to 5.1 in the 1980s and 4.2 in the 1990s. Adjustment of these total fertility rates to correspond to levels of fertility based on data from the 1991 and 2001 censuses resulted in total fertility declining from 7.6 in 1960-79 to 6.0 in 1980-89, and to 4.9 in 1990-99. The decline was associated with increased age at first marriage, declining marital fertility and increasing premarital fertility. Fertility among adolescents increased, whereas the fertility of women in all other age groups declined. During the 1980s, the war of independence contributed to declining fertility through spousal separation and delayed marriages.
Contraception has been employed in the study region since the 1980s, but in the early 1990s, use of contraceptives was still so limited that fertility was higher in North-Central Namibia than in other regions of the country. In the 1990s, fertility decline was largely a result of the increased prevalence of contraception. HIV prevalence among pregnant women increased from 4% in 1992 to 25% in 2001. In 2001, total fertility among HIV-infected women (3.7) was lower than that among other women (4.8), resulting in total fertility of 4.4 among the general population in 2001. The HIV epidemic explained more than a quarter of the decline in total fertility at population level during most of the 1990s. The HIV epidemic also reduced the number of children born by reducing the number of potential mothers. In the future, HIV will have an extensive influence on both the size and age structure of the Namibian population. Although HIV influences demographic development through both fertility and mortality, the effect through changes in fertility will be smaller than the effect through mortality. In the study region, as in some other regions of southern Africa, a new type of demographic transition is under way, one in which population growth stagnates or even reverses because of the combined effects of declining fertility and increasing mortality, both of which are consequences of the HIV pandemic.
Abstract:
The study in its entirety focused on factors related to adolescents' decisions concerning drug use. The term drug use is taken here to include the use of tobacco products, alcohol, narcotics, and other addictive substances. First, the reasons given for drug use (attributions) were investigated. Secondly, the influence on drug use of personal goals, the beliefs involved in decision making, psychosocial adjustment (including body image and involvement with peers), and parental relationships was studied. Two cohorts participated in the study. In 1984, a questionnaire on reasons for drug use was administered to a sample of adolescents aged 14-16 (N=396). A further questionnaire was administered to another sample of adolescents aged 14-16 (N=488) in 1999. The results for both cohorts were analyzed in Articles I and II. In Articles III and IV further analysis was carried out on the second cohort (N=488). The research report presented here provides a synthesis of all four articles, together with material from a further analysis. In a comparison of the two cohorts it was found that the attributions for drug use had changed considerably over the intervening fifteen-year period. In relation to alcohol and narcotics use, an increase was found in reasons involving inner subjective experiences, with mention of the good feeling and fun resulting from alcohol and narcotics use. In addition, the goals of alcohol consumption were increasingly perceived as drinking to get drunk, and drinking for its own sake. The attributions for the adolescents' own smoking behavior were quite different from the attributions for smoking by others. The attributions were only weakly influenced by the participants' gender or by their smoking habits, either in 1984 or in 1999. In relation to the participants' own smoking, the later questionnaire elicited more mention of inner subjective experiences involving "good feeling".
In relation to the perceived reasons for other people's smoking, it elicited more responses connected with the notion of "belonging". In the second sample, the results indicated that levels of body satisfaction among adolescent girls were lower than those among adolescent boys. Overall, dissatisfaction with one's physical appearance seemed to be related to drug use. Girls were also found to engage in more discussions than boys; this applied to (i) discussion with peers (concerning both intimate and general matters) and (ii) discussion with parents (concerning general matters). However, more than a quarter of the boys (out of the entire population) reported only low intimacy with both parents and peers. When both drinking and smoking were considered, it was girls in particular who reported drinking and smoking who also reported high intimacy with parents and peers; boys who reported drinking and smoking reported only medium intimacy with parents and peers. In addition, having an intimate relationship with one's peers was associated with a greater tendency to drink purely in order to get drunk. Overall, the results seemed to suggest that drug use is connected with a close relationship with peers and, surprisingly, with a close relationship with parents. Nevertheless, there were also indications that to some extent peer relationships can protect adolescents from smoking and alcohol use. The results, which underline the complexity of adolescent drug use, are taken up in the Discussion section. It may be that body image and/or other identity factors play a more prominent role in all drug use than has previously been acknowledged. It does appear that in planning support campaigns for adolescents at risk of drug use, we should focus more closely on individuals and their inner world. More research in this field is clearly needed, and some ideas for future research are therefore presented.
Abstract:
Life of children exposed to alcohol or drugs in utero. This study focused on the growth environment, physical development and socio-emotional development of children, aged 16 and under, who had been exposed to alcohol (n=78) or drugs (n=15) in utero. The aim of the study was to obtain a comprehensive picture of the living conditions of these children and to examine the role of the growth environment in their development. The study was carried out using questionnaires, written life stories and interviews. Attachment theory was used as the background theory of the study. Over half of the children exposed to alcohol were diagnosed with foetal alcohol syndrome (FAS), one quarter were diagnosed with foetal alcohol effects (FAE), and one fifth had no diagnosis. Most of the children exposed to drugs had been exposed to either amphetamines or cannabis, and a smaller number to heroin. Some of the children exposed to alcohol were mentally handicapped or intellectually impaired. The children exposed to drugs did not exhibit any serious learning difficulties, but a considerable number of them had problems in socio-emotional development. Language and speech problems, as well as problems with attention, concentration and social interaction, were typical among both the children exposed to alcohol and those exposed to drugs. Only one child had been placed in long-term foster care in a family immediately after leaving the maternity hospital. In the biological families there had been neglect, violence, mental health problems, crime and unemployment, and many parents were already dead. Two of the children had been sexually abused and four were suspected of having been abused. From the point of view of the children's development, the three most critical issues were 1) the range of illnesses and handicaps that had impaired their functional capacity as a result of prenatal exposure to alcohol, 2) the child's age at the time of long-term placement, and 3) the number of their traumatic experiences.
The relationship with the biological parents after placement also played a role. Children with symptoms were found in all diagnosis categories and exposure types. The children with the fewest symptoms were found among those who had never lived with their biological parents. Almost all of the children were exhibiting strong symptoms at the time of placement in foster care. In most cases they behaved in a disorderly manner towards others, but some children were withdrawn. The most conspicuous feature among those with the most severe symptoms was their disorganized behaviour. Placement in a foster family enhanced the children's development but did not solve their problems. The foster parents who brought these children up did not receive as much therapy for the children, or as much support for the upbringing, as they appear to have needed. In Finland, transfer to long-term custody is based on strict criteria. The rights of children prescribed in the child protection law are not fulfilled in practice. Key words: FASD, FAS, FAE, alcohol exposure, drug exposure, illegal drugs, early interaction, child development, attachment