57 results for Prescribed mean-curvature problem

in Helda - Digital Repository of University of Helsinki


Relevance:

40.00%

Publisher:

Abstract:

During the last decades, mean-field models, in which large-scale magnetic fields and differential rotation arise due to the interaction of rotation and small-scale turbulence, have been enormously successful in reproducing many of the observed features of the Sun. In the meantime, new observational techniques, most prominently helioseismology, have yielded invaluable information about the interior of the Sun. This new information, however, imposes strict conditions on mean-field models. Moreover, most of the present mean-field models depend on knowledge of the small-scale turbulent effects that give rise to the large-scale phenomena. In many mean-field models these effects are prescribed in an ad hoc fashion due to the lack of this knowledge. With large enough computers it would be possible to solve the MHD equations numerically under stellar conditions. However, the problem is too large by several orders of magnitude for present-day and any foreseeable computers. In our view, a combination of mean-field modelling and local 3D calculations is a more fruitful approach. The large-scale structures are well described by global mean-field models, provided that the small-scale turbulent effects are adequately parameterized. The latter can be achieved by performing local calculations, which allow a much higher spatial resolution than can be achieved in direct global calculations. In the present dissertation three aspects of mean-field theories and models of stars are studied. Firstly, the basic assumptions of different mean-field theories are tested with calculations of isotropic turbulence and hydrodynamic, as well as magnetohydrodynamic, convection. Secondly, even if the mean-field theory is unable to give the required transport coefficients from first principles, it is in some cases possible to compute these coefficients from 3D numerical models in a parameter range that can be considered to describe the main physical effects in an adequately realistic manner. In the present study, the Reynolds stresses and turbulent heat transport, responsible for the generation of differential rotation, were determined along the mixing length relations describing convection in stellar structure models. Furthermore, the alpha-effect and magnetic pumping due to turbulent convection in the rapid rotation regime were studied. The third area of the present study is to apply the local results in mean-field models, a task we begin by applying the results concerning the alpha-effect and turbulent pumping in mean-field models describing the solar dynamo.
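For orientation, the large-scale field in models of this kind obeys the standard mean-field induction equation (given here in a generic textbook form, not the dissertation's specific formulation), in which the small-scale turbulence enters only through the mean electromotive force:

\[
\frac{\partial \overline{\mathbf{B}}}{\partial t}
  = \nabla \times \left( \overline{\mathbf{U}} \times \overline{\mathbf{B}}
  + \boldsymbol{\mathcal{E}}
  - \eta \, \nabla \times \overline{\mathbf{B}} \right),
\qquad
\boldsymbol{\mathcal{E}} \approx \alpha \overline{\mathbf{B}}
  + \boldsymbol{\gamma} \times \overline{\mathbf{B}}
  - \eta_{\mathrm{t}} \, \nabla \times \overline{\mathbf{B}},
\]

where \( \alpha \) describes the alpha-effect, \( \boldsymbol{\gamma} \) the turbulent (magnetic) pumping velocity and \( \eta_{\mathrm{t}} \) the turbulent diffusivity; transport coefficients of exactly this kind are what the local 3D calculations are used to determine.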

Relevance:

30.00%

Publisher:

Abstract:

The object of this dissertation is to study globally defined bounded p-harmonic functions on Cartan-Hadamard manifolds and Gromov hyperbolic metric measure spaces. Such functions are constructed by solving the so-called Dirichlet problem at infinity. This problem is to find a p-harmonic function on the space that extends continuously to the boundary at infinity and attains the given boundary values there. The dissertation consists of an overview and three published research articles. In the first article the Dirichlet problem at infinity is considered for more general A-harmonic functions on Cartan-Hadamard manifolds. In the special case of two dimensions the Dirichlet problem at infinity is solved by only assuming that the sectional curvature has a certain upper bound. A sharpness result is proved for this upper bound. In the second article the Dirichlet problem at infinity is solved for p-harmonic functions on Cartan-Hadamard manifolds under the assumption that the sectional curvature is bounded outside a compact set from above and from below by functions that depend on the distance to a fixed point. The curvature bounds allow examples of quadratic decay and examples of exponential growth. In the final article a generalization of the Dirichlet problem at infinity for p-harmonic functions is considered on Gromov hyperbolic metric measure spaces. Existence and uniqueness results are proved, and Cartan-Hadamard manifolds are considered as an application.
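For readers unfamiliar with the terminology, the equations involved are standard (the following is a textbook statement, not a result of the thesis): a p-harmonic function is a continuous weak solution of the p-Laplace equation, and A-harmonic functions solve a structurally similar quasilinear equation,

\[
\Delta_p u := \operatorname{div}\!\left( |\nabla u|^{p-2} \nabla u \right) = 0,
\qquad 1 < p < \infty,
\qquad\text{and, more generally,}\qquad
\operatorname{div} \mathcal{A}(x, \nabla u) = 0 .
\]

The Dirichlet problem at infinity then asks, for a given continuous function \( f \) on the boundary at infinity \( \partial_\infty M \), for a p-harmonic function \( u \) on \( M \) that extends continuously to \( \partial_\infty M \) and equals \( f \) there.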

Relevance:

20.00%

Publisher:

Abstract:

This study examines both theoretically and empirically how well the theories of Norman Holland, David Bleich, Wolfgang Iser and Stanley Fish can explain readers' interpretations of literary texts. The theoretical analysis concentrates on their views on language from the point of view of Wittgenstein's Philosophical Investigations. This analysis shows that many of the assumptions related to language in these theories are problematic. The empirical data show that readers often form very similar interpretations. Thus the study challenges the common assumption that literary interpretations tend to be idiosyncratic. The empirical data consist of freely worded written answers to questions on three short stories. The interpretations were made by 27 Finnish university students. Some of the questions addressed issues that were discussed in large parts of the texts, some referred to issues that were mentioned only in passing or implied. The short stories were "The Witch à la Mode" by D. H. Lawrence, "Rain in the Heart" by Peter Taylor and "The Hitchhiking Game" by Milan Kundera. According to Fish, readers create both the formal features of a text and their interpretation of it according to an interpretive strategy. People who agree form an interpretive community. However, a typical answer usually contains ideas repeated by several readers as well as observations not mentioned by anyone else. Therefore it is very difficult to determine which readers belong to the same interpretive community. Moreover, readers with opposing opinions often seem to pay attention to the same textual features and even acknowledge the possibility of an opposing interpretation; therefore they do not seem to create the formal features of the text in different ways. Iser suggests that an interpretation emerges from the interaction between the text and the reader when the reader determines the implications of the text and in this way fills the "gaps" in the text. Iser believes that the text guides the reader, but as he also believes that meaning is on a level beyond words, he cannot explain how the text directs the reader. The similarity in the interpretations and the fact that the agreement is strongest when related to issues that are discussed broadly in the text do, however, support his assumption that readers are guided by the text. In Bleich's view, all interpretations have personal motives and each person has an idiosyncratic language system. The situation where a person learns a word determines the most important meaning it has for that person. In order to uncover the personal etymologies of words, Bleich asks his readers to associate freely on the basis of a text and note down all the personal memories and feelings that the reading experience evokes. Bleich's theory of the idiosyncratic language system seems to rely on a misconceived notion of the role that ostensive definitions have in language use. The readers' responses show that spontaneous associations to personal life seem to colour the readers' interpretations, but such instances are rather rare. According to Holland, an interpretation reflects the reader's identity theme. Language use is regulated by shared rules, but everyone follows the rules in his or her own way. Words mean different things to different people. The problem with this view is that if there is any basis for language use, it seems to be the shared way of following linguistic rules.
Wittgenstein suggests that our understanding of words is related to the shared ways of using words and our understanding of human behaviour. This view seems to give better grounds for understanding similarities and differences in literary interpretations than the theories of Holland, Bleich, Fish and Iser.

Relevance:

20.00%

Publisher:

Abstract:

In this study I consider what kind of perspective on the mind-body problem is taken and can be taken by a philosophical position called non-reductive physicalism. Many positions fall under this label. The form of non-reductive physicalism which I discuss is in essential respects the position taken by Donald Davidson (1917-2003) and Georg Henrik von Wright (1916-2003). I defend their positions and discuss the unrecognized similarities between their views. Non-reductive physicalism combines two theses: (a) Everything that exists is physical; (b) Mental phenomena cannot be reduced to the states of the brain. This means that according to non-reductive physicalism the mental aspect of humans (be it a soul, mind, or spirit) is an irreducible part of the human condition. Davidson and von Wright also claim that, in some important sense, the mental aspect of a human being does not reduce to the physical aspect, that there is a gap between these aspects that cannot be closed. I claim that their arguments for this conclusion are convincing. I also argue that whereas von Wright and Davidson give interesting arguments for the irreducibility of the mental, their physicalism is unwarranted. These philosophers do not give good reasons for believing that reality is thoroughly physical. Notwithstanding the materialistic consensus in the contemporary philosophy of mind, the ontology of mind is still an uncharted territory where real breakthroughs are not to be expected until a radically new ontological position is developed. The third main claim of this work is that the problem of mental causation cannot be solved from the Davidsonian - von Wrightian perspective. The problem of mental causation is the problem of how mental phenomena like beliefs can cause physical movements of the body. As I see it, the essential point of non-reductive physicalism - the irreducibility of the mental - and the problem of mental causation are closely related. If mental phenomena do not reduce to causally effective states of the brain, then what justifies the belief that mental phenomena have causal powers? If mental causes do not reduce to physical causes, then how can one tell when - or whether - the mental causes in terms of which human actions are explained are actually effective? I argue that this - how to decide when mental causes really are effective - is the real problem of mental causation. The motivation to explore and defend a non-reductive position stems from the belief that reductive physicalism leads to serious ethical problems. My claim is that Davidson's and von Wright's ultimate reason to defend a non-reductive view goes back to their belief that a reductive understanding of human nature would be a narrow and possibly harmful perspective. The final conclusion of my thesis is that von Wright's and Davidson's positions provide a starting point from which the current scientistic philosophy of mind can be critically further explored in the future.

Relevance:

20.00%

Publisher:

Abstract:

The thesis addresses the problem of Finnish Iron Age bells, pellet bells and bell pendants, previously unexplored musical artefacts from 400–1300 AD. The study, which contributes to the field of music archaeology, aims to provide a gateway to ancient soundworlds and ideas of music making. The research questions include: Where did these metal artefacts come from? How did they sound? How were they used? What did their sound mean to the people of the Iron Age? The data collected at the National Museum of Finland and at several provincial museums covers a total of 486 bells, pellet bells and bell pendants. By means of a cluster analysis, each category was divided into several subgroups. The subgroups, which all seem to have a different dating and geographical distribution, represent a spread of both local and international manufacturing traditions. According to an elemental analysis, the material varies from iron to copper-tin, copper-lead and copper-tin-lead alloys. Clappers, pellets and pebbles prove that the bells and pellet bells were indisputably instruments intended for sound production. Clusters of small bell pendants, however, probably produced sound by jingling against each other. Spectrogram plots reveal that the partials of the still audible sounds range from 1 000 to 19 850 Hz. On the basis of 129 inhumation graves, hoards, barrows and stray finds, it seems evident that the bells, pellet bells and bell pendants were fastened to dresses and horse harnesses or carried in pouches and boxes. The resulting acoustic spaces could have been employed in constructing social hierarchies, since the instruments usually appear in richly furnished graves. Furthermore, the instruments repeatedly occur with crosses, edge tools and zoomorphic pendants that in the later Finnish-Karelian culture were regarded as prophylactic amulets. In the Iron Age as well as in later folk culture, the bell sounds seem to have expressed territorial, social and cosmological boundaries.
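As an illustration of the spectrogram analysis mentioned above (a minimal sketch, not the author's actual workflow; the file name, FFT length and peak-picking threshold are placeholder assumptions), the audible partials of a recorded bell could be estimated with SciPy roughly as follows:

# Hypothetical sketch: estimate the prominent partials of a bell recording.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

fs, samples = wavfile.read("bell.wav")          # placeholder file; fs = sampling rate in Hz
if samples.ndim > 1:                            # mix a stereo recording down to mono
    samples = samples.mean(axis=1)

freqs, times, power = spectrogram(samples, fs=fs, nperseg=4096, noverlap=2048)

mean_power = power.mean(axis=1)                 # average spectral power over the whole decay
threshold = 0.05 * mean_power.max()             # assumed peak-picking threshold
partials = freqs[mean_power > threshold]        # frequency bins that stand out as partials
print("Estimated partials (Hz):", np.round(partials, 1))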

Relevance:

20.00%

Publisher:

Abstract:

Design embraces several disciplines dedicated to the production of artifacts and services. These disciplines are quite independent and only recently has psychological interest focused on them. Nowadays, the psychological theories of design, also called design cognition literature, describe the design process from the information processing viewpoint. These models co-exist with the normative standards of how designs should be crafted. In many places there are concrete discrepancies between these two in a way that resembles the differences between actual and ideal decision-making. This study aimed to explore the possible difference related to problem decomposition. Decomposition is a standard component of human problem-solving models and is also included in the normative models of design. The idea of decomposition is to focus on a single aspect of the problem at a time. Despite its significance, the nature of decomposition in conceptual design is poorly understood and has only been preliminarily investigated. This study addressed the status of decomposition in conceptual design of products using protocol analysis. Previous empirical investigations have argued that there are implicit and explicit decomposition, but have not provided a theoretical basis for these two. Therefore, the current research began by reviewing the problem-solving and design literature and then composing a cognitive model of the solution search of conceptual design. The result is a synthetic view which describes recognition and decomposition as the basic schemata for conceptual design. A psychological experiment was conducted to explore decomposition. In the test, sixteen (N=16) senior students of mechanical engineering created concepts for two alternative tasks. The concurrent think-aloud method and protocol analysis were used to study decomposition. The results showed that despite the emphasis on decomposition in the formal education, only a few designers (N=3) used decomposition explicitly and spontaneously in the presented tasks, although the designers in general applied a top-down control strategy. Instead, as inferred from the use of structured strategies, the designers always relied on implicit decomposition. These results confirm the initial observations found in the literature, but they also suggest that decomposition should be investigated further. In the future, the benefits and possibilities of explicit decomposition should be considered along with the cognitive mechanisms behind decomposition. After that, the current results could be reinterpreted.

Relevance:

20.00%

Publisher:

Abstract:

This study examines leadership skills in a municipal organisation, reflecting managers' views on the leadership skills required. The purpose of the study was to identify the most important leadership skills currently and in the future, as well as how well these skills are mastered. The study also examines the importance of, and the change and development needs in, these leadership skills. In addition, the effect of background variables on the evaluation of leadership skills was examined. A quantitative research method was used. The material was collected with a structured questionnaire from 324 managers of the city of Kotka. The SPSS program was used to analyse the material, with factor analysis as the main method; means and standard deviations were also used to better reflect the results. Based on the results, the most important leadership skills currently and in the future are associated with internet skills, work control, problem solving and human resource management. Managers expected the importance of leadership skills to grow in the future, mainly in software utilisation, language skills, communication skills and financial leadership skills. According to the managers, their strongest competence is in internet skills; they also considered themselves to have good command of skills related to employee know-how and manager networking. At the same time, significant development needs in leadership skills were identified. The main improvement areas were software utilisation, work control, human resource management skills and skills requiring problem solving. It should be noted that, apart from software utilisation, the main improvement areas appeared in the leadership skills that were evaluated as most important. Position, municipal segment and sex were observed to explain most of the variation in the responses.

Relevance:

20.00%

Publisher:

Abstract:

This academic work begins with a compact presentation of the general background to the study, which also includes an autobiographical account of the interest behind this research. The presentation introduces readers who know little of the topic of this research to the structure of the educational system and to the value given to education in Nigeria. It further concentrates on the dynamic interplay between academic and professional qualifications and teachers' job effectiveness in secondary schools in Nigeria in particular, and in Africa in general. The aim of this study is to produce a systematic analysis and a rich theoretical and empirical description of teachers' teaching competencies. The theoretical part comprises a comprehensive literature review that focuses on research conducted in the areas of academic and professional qualification and teachers' job effectiveness, teaching competencies, and the role of teacher education, with particular emphasis on school effectiveness and improvement. This research benefits greatly from the functionalist conception of education, which is built upon two emphases: the application of the scientific method to the objective social world, and the use of an analogy between the individual 'organism' and 'society'. To this end, it offers us an opportunity to define terms systematically and to view problems as always being interrelated with other components of society. The empirical part involves describing and interpreting what educational objectives can be achieved with the help of teachers' teaching competencies in close connection to educational planning, teacher training and development, and achieving them without waste. The data used in this study were collected between 2002 and 2003 from teachers, principals, and supervisors of education from the Ministry of Education and the Post Primary Schools Board in the Rivers State of Nigeria (N=300). The data were collected from interviews, documents, observation, and questionnaires and were analyzed using both qualitative and quantitative methods to strengthen the validity of the findings. The data collected were analyzed to answer the specific research questions and hypotheses posited in this study. The data analysis involved the use of multiple statistical procedures: percentages and mean point values, t-tests of significance, one-way analysis of variance (ANOVA), and cross-tabulation. The results obtained from the data analysis show that teachers require professional knowledge and professional teaching skills, as well as a broad base of general knowledge (e.g., morality, service, cultural capital, institutional survey). Above all, in order to carry out instructional processes effectively, teachers should be both academically and professionally trained. This study revealed that teachers are not, however, expected to have an extraordinary memory, but are rather looked upon as persons capable of thinking in the right direction. This study may provide a solution to the problem of teacher education and school effectiveness in Nigeria. For this reason, I offer this treatise to anyone seriously committed to improving schools in developing countries in general and in Nigeria in particular, in order to improve the lives of all their citizens.
In particular, I write this to encourage educational planners, education policy makers, curriculum developers, principals, teachers, and students of education interested in empirical information and methods to conceptualize the issue this study has raised, and to provide them with useful suggestions to help them improve secondary schooling in Nigeria. Multiple audiences, though, exist for any text. For this reason, I trust that the academic community will find this piece of work a useful addition to the existing literature on school effectiveness and school improvement. Through integrating concepts from a number of disciplines, I aim to describe as holistic a representation as space could allow of the components of school effectiveness and quality improvement. The study offers a new perspective on teachers' professional competencies, one that not only takes into consideration the unique characteristics of the variables used in this study, but also recognizes their environmental and cultural derivation. In addition, researchers should focus their attention on the ways in which both professional and non-professional teachers construct and apply their methodological competencies, such as their grouping procedures and behaviors, to the schooling of students. Keywords: Professional Training, Academic Training, Professionally Qualified, Academically Qualified, Professional Qualification, Academic Qualification, Job Effectiveness, Job Efficiency, Educational Planning, Teacher Training and Development, Nigeria.

Relevance:

20.00%

Publisher:

Abstract:

Type 2 diabetes is an increasing, serious, and costly public health problem. The increase in the prevalence of the disease can mainly be attributed to changing lifestyles leading to physical inactivity, overweight, and obesity. These lifestyle-related risk factors also offer a possibility for preventive interventions. Until recently, proper evidence regarding the prevention of type 2 diabetes has been virtually missing. To be cost-effective, intensive interventions to prevent type 2 diabetes should be directed to people at an increased risk of the disease. The aim of this series of studies was to investigate whether type 2 diabetes can be prevented by lifestyle intervention in high-risk individuals, and to develop a practical method to identify individuals who are at high risk of type 2 diabetes and would benefit from such an intervention. To study the effect of lifestyle intervention on diabetes risk, we recruited 522 volunteer, middle-aged (aged 40-64 at baseline), overweight (body mass index > 25 kg/m2) men (n = 172) and women (n = 350) with impaired glucose tolerance to the Diabetes Prevention Study (DPS). The participants were randomly allocated either to the intensive lifestyle intervention group or the control group. The control group received general dietary and exercise advice at baseline, and had an annual physician's examination. The participants in the intervention group received, in addition, individualised dietary counselling by a nutritionist. They were also offered circuit-type resistance training sessions and were advised to increase overall physical activity. The intervention goals were to reduce body weight (5% or more reduction from baseline weight), limit dietary fat (< 30% of total energy consumed) and saturated fat (< 10% of total energy consumed), and to increase dietary fibre intake (15 g / 1000 kcal or more) and physical activity (≥ 30 minutes/day). Diabetes status was assessed annually by repeated 75 g oral glucose tolerance testing. The first analysis of end-points was completed after a mean follow-up of 3.2 years, and the intervention phase was terminated after a mean duration of 3.9 years. After that, the study participants continued to visit the study clinics for the annual examinations, for a mean of 3 years. The intervention group showed significantly greater improvement in each intervention goal. After 1 and 3 years, mean weight reductions were 4.5 and 3.5 kg in the intervention group and 1.0 kg and 0.9 kg in the control group. Cardiovascular risk factors improved more in the intervention group. After a mean follow-up of 3.2 years, the risk of diabetes was reduced by 58% in the intervention group compared with the control group. The reduction in the incidence of diabetes was directly associated with achieved lifestyle goals. Furthermore, those who consumed a moderate-fat, high-fibre diet achieved the largest weight reduction and, even after adjustment for weight reduction, the lowest diabetes risk during the intervention period. After discontinuation of the counselling, the differences in lifestyle variables between the groups still remained favourable for the intervention group. During the post-intervention follow-up period of 3 years, the risk of diabetes was still 36% lower among the former intervention group participants, compared with the former control group participants.
To develop a simple screening tool to identify individuals who are at high risk of type 2 diabetes, follow-up data from two population-based cohorts of 35-64-year-old men and women were used. The National FINRISK Study 1987 cohort (model development data) included 4435 subjects, with 182 new drug-treated cases of diabetes identified during ten years, and the FINRISK Study 1992 cohort (model validation data) included 4615 subjects, with 67 new cases of drug-treated diabetes during five years, ascertained using the Social Insurance Institution's Drug register. Baseline age, body mass index, waist circumference, history of antihypertensive drug treatment and high blood glucose, physical activity and daily consumption of fruits, berries or vegetables were selected into the risk score as categorical variables. In the 1987 cohort the optimal cut-off point of the risk score identified 78% of those who developed diabetes during the follow-up (the sensitivity of the test) and 77% of those who remained free of diabetes (the specificity of the test). In the 1992 cohort the risk score performed equally well. The final Finnish Diabetes Risk Score (FINDRISC) form includes, in addition to the predictors of the model, a question about family history of diabetes and the age category of over 64 years. When applied to the DPS population, the baseline FINDRISC value was associated with diabetes risk among the control group participants only, indicating that the intensive lifestyle intervention given to the intervention group participants abolished the diabetes risk associated with baseline risk factors. In conclusion, the intensive lifestyle intervention produced long-term beneficial changes in diet, physical activity, body weight, and cardiovascular risk factors, and reduced diabetes risk. Furthermore, the effects of the intervention were sustained after the intervention was discontinued. The FINDRISC proved to be a simple, fast, inexpensive, non-invasive, and reliable tool to identify individuals at high risk of type 2 diabetes. The use of FINDRISC to identify high-risk subjects, followed by lifestyle intervention, provides a feasible scheme for preventing type 2 diabetes that could be implemented in the primary health care system.
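To make the quoted sensitivity and specificity figures concrete (a minimal sketch with invented numbers, not the FINRISK data and not the actual FINDRISC point values), classifying people as high risk when their score reaches a cut-off and comparing this against the follow-up outcome gives:

# Hypothetical sketch: sensitivity and specificity of a risk-score cut-off.
# The scores, outcomes and cut-off below are invented for illustration only.
risk_scores  = [3, 7, 11, 9, 14, 5, 12, 8, 10, 6]   # FINDRISC-style points per person
developed_dm = [0, 0, 1,  0, 1,  0, 1,  0, 1,  0]   # 1 = drug-treated diabetes during follow-up
cutoff = 9                                           # classified as "high risk" if score >= cutoff

tp = sum(s >= cutoff and d == 1 for s, d in zip(risk_scores, developed_dm))
fn = sum(s <  cutoff and d == 1 for s, d in zip(risk_scores, developed_dm))
tn = sum(s <  cutoff and d == 0 for s, d in zip(risk_scores, developed_dm))
fp = sum(s >= cutoff and d == 0 for s, d in zip(risk_scores, developed_dm))

sensitivity = tp / (tp + fn)   # share of future cases flagged as high risk
specificity = tn / (tn + fp)   # share of non-cases correctly left unflagged
print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")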

Relevance:

20.00%

Publisher:

Abstract:

Prescribing for older patients is challenging. The prevalence of diseases increases with advancing age and causes extensive drug use. Impairments in cognitive, sensory, social and physical functioning, multimorbidity and comorbidities, as well as age-related changes in pharmacokinetics and pharmacodynamics all add to the complexity of prescribing. This study is a cross-sectional assessment of all long-term residents aged ≥ 65 years in all nursing homes in Helsinki, Finland. The residents’ health status was assessed and data on their demographic factors, health and medications were collected from their medical records in February 2003. This study assesses some essential issues in prescribing for older people: psychotropic drugs (Paper I), laxatives (Paper II), vitamin D and calcium supplements (Paper III), potentially inappropriate drugs for older adults (PIDs) and drug-drug interactions (DDIs) (Paper IV), as well as prescribing in public and private nursing homes. A resident was classified as a medication user if his or her medication record indicated a regular dosing schedule. Others were classified as non-users. The Mini Nutritional Assessment (MNA) was used to assess residents’ nutritional status, the Beers 2003 criteria to assess the use of PIDs, and the Swedish, Finnish, INteraction X-referencing database (SFINX) to evaluate their exposure to DDIs. Of all nursing home residents in Helsinki, 82% (n=1987) participated in Studies I, II, and IV and 87% (n=2114) participated in Study III. The residents’ mean age was 84 years, 81% were female, and 70% were diagnosed with dementia. The mean number of drugs was 7.9 per resident; 40% of the residents used ≥ 9 drugs per day, and were thus exposed to polypharmacy. Eighty percent of the residents received psychotropics; 43% received antipsychotics, and 45% used antidepressants. Anxiolytics were prescribed to 26%, and hypnotics to 28% of the residents. Of those residents diagnosed with dementia, 11% received antidementia drugs. Fifty-five percent of the residents used laxatives regularly. In multivariate analysis, the factors associated with regular laxative use were advanced age, immobility, poor nutritional status, chewing problems, Parkinson’s disease, and a high number of drugs. Eating snacks between meals was associated with a lower risk of laxative use. Of all participants, 33% received vitamin D supplementation, 28% received calcium supplementation, and 20% received both vitamin D and calcium. The dosage of vitamin D was rather low: 21% received vitamin D 400 IU (10 µg) or more, and only 4% received 800 IU (20 µg) or more. In multivariate analysis, residents who received vitamin D supplementation enjoyed better nutritional status, ate snacks between meals, suffered no constipation, and received regular weight monitoring. Those residents receiving PIDs (34% of all residents) more often used psychotropic medication and were more often exposed to polypharmacy than residents receiving no PIDs. Residents receiving PIDs were less often diagnosed with dementia than were residents receiving no PIDs. The three most prevalent PIDs were short-acting benzodiazepines in greater dosages than recommended, hydroxyzine, and nitrofurantoin. These three drugs accounted for nearly 77% of all PID use. Of all residents, less than 5% were susceptible to a clinically significant DDI. The most common DDIs were related to the use of potassium-sparing diuretics, carbamazepine, and codeine.
Residents exposed to potential DDIs were younger, had more often suffered a previous stroke, more often used psychotropics, and were more often exposed to PIDs and polypharmacy than were residents not exposed to DDIs. Residents in private nursing homes were less often exposed to polypharmacy than were residents in public nursing homes. Long-term residents in nursing homes in Helsinki use, on average, nearly eight drugs daily. The use of psychotropic drugs in our study was notably more common than in international studies. The prevalence of laxative use was similar to that reported in prior international studies. Despite the known benefit of, and recommendations for, vitamin D supplementation for elderly people residing mostly indoors, the proportion of nursing home residents receiving vitamin D and calcium was surprisingly low. The use of PIDs was common among nursing home residents. PIDs increased the likelihood of DDIs. However, DDIs did not seem to be a major concern among the nursing home population. Monitoring PIDs and potential drug interactions could improve the quality of prescribing.
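A rough sketch of how such a medication review could be automated (illustrative only: the polypharmacy cut-off of nine or more daily drugs follows the definition above, but the resident record and the two-item PID list are placeholders, not the full Beers 2003 criteria or the SFINX database):

# Hypothetical sketch: flag polypharmacy and potentially inappropriate drugs (PIDs).
POLYPHARMACY_CUTOFF = 9                               # ">= 9 drugs per day" as in the study
PID_EXAMPLES = {"hydroxyzine", "nitrofurantoin"}      # two PIDs named above; not a complete list

def review_medications(regular_drugs):
    """Return simple flags for one resident's list of regularly dosed drugs."""
    drugs = {d.lower() for d in regular_drugs}
    return {
        "n_drugs": len(drugs),
        "polypharmacy": len(drugs) >= POLYPHARMACY_CUTOFF,
        "pids_found": sorted(drugs & PID_EXAMPLES),
    }

# Invented example record for a single resident.
print(review_medications(
    ["Hydroxyzine", "Furosemide", "Warfarin", "Paracetamol", "Citalopram",
     "Risperidone", "Lactulose", "Calcium", "Vitamin D"]
))
# -> {'n_drugs': 9, 'polypharmacy': True, 'pids_found': ['hydroxyzine']}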

Relevance:

20.00%

Publisher:

Abstract:

Aims: Helicobacter pylori infection, although its prevalence is declining in the Western world, is still responsible for several clinically important diseases. None of the diagnostic tests is perfect, and in this study the performance of three stool antigen tests was assessed. In areas of high H. pylori prevalence, identifying the patients with the greatest benefit from eradication therapy may be a problem; the role of duodenal gastric metaplasia in categorizing patients at risk for duodenal ulcer was evaluated in this respect. Whether persistent chronic inflammation and elevated H. pylori antibodies after successful eradication are associated with each other or with atrophic gastritis, a long-term sequela of H. pylori infection, was also studied. Patients and methods: The three stool antigen tests were assessed in pre- and post-eradication settings among 364 subjects in two studies, as compared to the rapid urease test (RUT), histology, culture, the 13C-urea breath test (UBT) and enzyme immunoassay (EIA) based H. pylori serology. The association of duodenal gastric metaplasia with duodenal ulcer was evaluated in a retrospective study including 1054 patients gastroscopied due to clinical indications and 154 patients previously operated on for duodenal ulcer. The extent of duodenal gastric metaplasia was assessed from histological specimens in different patient groups formed on the basis of gastroscopy findings and H. pylori infection. Chronic gastric inflammation (108 patients) and H. pylori antibodies and serum markers for atrophy (77 patients) were assessed in patients earlier treated for H. pylori. Results: Of the stool antigen tests studied, the monoclonal antibody-based EIA test showed the highest sensitivity and specificity both in the pre-treatment setting (96.9% and 95.9%) and after therapy (96.9% and 97.8%). The polyclonal stool antigen test and the in-office test had at baseline a sensitivity of 91% and 94%, and a specificity of 96% and 89%, respectively, and in a post-treatment setting, a sensitivity of 78% and 91%, and a specificity of 97%, respectively. Duodenal gastric metaplasia was strongly associated with H. pylori positive duodenal ulcer (odds ratio 42). Although still common five years after eradication, persistent chronic gastric inflammation (21%) and elevated H. pylori antibodies (33%) were associated neither with each other nor with atrophic gastritis. Conclusions: Current H. pylori infection can feasibly be diagnosed by a monoclonal antibody-based EIA test with an accuracy comparable to that of the reference methods. The performance of the polyclonal test as compared to the monoclonal test was inferior, especially in the post-treatment setting. The in-office test had a low specificity for primary diagnosis, and hence positive test results should probably be confirmed with another test before eradication therapy is prescribed. The presence of widespread duodenal gastric metaplasia showed promising results in detecting patients who should be treated for H. pylori due to an increased risk of duodenal ulcer. If serology is used later on in patients earlier successfully treated for H. pylori, it should be taken into account that H. pylori antibodies may remain elevated for years for unknown reasons. However, this phenomenon was not found to be associated with persistent chronic inflammation or atrophic changes.

Relevance:

20.00%

Publisher:

Abstract:

Scots pine (Pinus sylvestris L.) and Norway spruce (Picea abies (L.) Karst.) forests dominate in Finnish Lapland. The need to study the effect of both soil factors and site preparation on the performance of planted Scots pine has increased due to the problems encountered in reforestation, especially on mesic and moist, formerly spruce-dominated sites. The present thesis examines soil hydrological properties and conditions, and the effect of site preparation on them, on 10 pine- and 10 spruce-dominated upland forest sites. Finally, the effects of site preparation and reforestation methods, as well as of soil hydrology, on the long-term performance of planted Scots pine are summarized. The results showed that pine and spruce sites differ significantly in their soil physical properties. Under field capacity or wetter soil moisture conditions, planted pines presumably suffer from excessive soil water and poor soil aeration on most of the originally spruce sites, but not on the pine sites. The results also suggested that site preparation affects the soil-water regime, and thus the prerequisites for forest growth, over two decades after site preparation. High variation in the survival and mean height of planted pine was found. The study suggested that on spruce sites, pine survival is the lowest on sites that dry out slowly after rainfall events, and that height growth is the fastest on soils that reach favourable aeration conditions for root growth soon after saturation, and/or where the average air-filled porosity near field capacity is large enough for good root growth. Survival, but not mean height, can be enhanced by employing intensive site preparation methods on spruce sites. On coarser-textured pine sites, site preparation methods do not affect survival, but methods affecting soil fertility, such as prescribed burning and ploughing, seem to enhance the height growth of planted Scots pines over several decades. The use of soil water content in situ as the sole criterion for sites suitable for pine reforestation was tested and found to be relatively unreliable. The thesis identified new potential soil variables, which should be tested using other data in the future.

Relevance:

20.00%

Publisher:

Abstract:

The aim of this thesis is to develop a fully automatic lameness detection system that operates in a milking robot. The instrumentation, measurement software, algorithms for data analysis and a neural network model for lameness detection were developed. Automatic milking has become a common practice in dairy husbandry, and in the year 2006 about 4000 farms worldwide used over 6000 milking robots. There is a worldwide movement with the objective of fully automating every process from feeding to milking. The increase in automation is a consequence of increasing farm sizes, the demand for more efficient production and the growth of labour costs. As the level of automation increases, the time that the cattle keeper uses for monitoring animals often decreases. This has created a need for systems for automatically monitoring the health of farm animals. The popularity of milking robots also offers a new and unique possibility to monitor animals in a single confined space up to four times daily. Lameness is a crucial welfare issue in the modern dairy industry. Limb disorders cause serious welfare, health and economic problems especially in loose housing of cattle. Lameness causes losses in milk production and leads to early culling of animals. These costs could be reduced with early identification and treatment. At present, only a few methods for automatically detecting lameness have been developed, and the most common methods used for lameness detection and assessment are various visual locomotion scoring systems. The problem with locomotion scoring is that it needs experience to be conducted properly, it is labour-intensive as an on-farm method and the results are subjective. A four-balance system for measuring the leg load distribution of dairy cows during milking in order to detect lameness was developed and set up at the University of Helsinki research farm Suitia. The leg weights of 73 cows were successfully recorded during almost 10,000 robotic milkings over a period of 5 months. The cows were locomotion scored weekly, and the lame cows were inspected clinically for hoof lesions. Unsuccessful measurements, caused by cows standing outside the balances, were removed from the data with a special algorithm, and the mean leg loads and the number of kicks during milking were calculated. In order to develop an expert system to automatically detect lameness cases, a model was needed. A probabilistic neural network (PNN) classifier model was chosen for the task. The data were divided into two parts and 5,074 measurements from 37 cows were used to train the model. The model was then evaluated for its ability to detect lameness in the validation dataset, which had 4,868 measurements from 36 cows. The model was able to classify 96% of the measurements correctly as sound or lame cows, and 100% of the lameness cases in the validation data were identified. The number of measurements causing false alarms was 1.1%. The developed model has the potential to be used for on-farm decision support and can be used in a real-time lameness monitoring system.
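As a rough illustration of how a probabilistic neural network classifies a single milking-time measurement (a sketch under assumed feature names and toy data; the actual features, smoothing parameter and training set of the thesis are not reproduced here), a PNN is essentially a Gaussian-kernel density estimate per class, with the most probable class returned:

# Hypothetical sketch of a probabilistic neural network (Parzen-window classifier).
# Feature values and the smoothing parameter sigma are invented placeholders.
import numpy as np

def pnn_classify(x, train_X, train_y, sigma=0.5):
    """Return the class whose Gaussian-kernel density estimate at x is largest."""
    scores = {}
    for label in np.unique(train_y):
        samples = train_X[train_y == label]
        # Sum of Gaussian kernels centred on this class's training samples.
        sq_dists = np.sum((samples - x) ** 2, axis=1)
        scores[label] = np.mean(np.exp(-sq_dists / (2 * sigma ** 2)))
    return max(scores, key=scores.get)

# Toy data: [share of load on left hind leg, share on right hind leg, kicks per milking]
train_X = np.array([[0.50, 0.50, 0.0],
                    [0.48, 0.52, 1.0],
                    [0.30, 0.70, 4.0],
                    [0.25, 0.75, 5.0]])
train_y = np.array(["sound", "sound", "lame", "lame"])

print(pnn_classify(np.array([0.28, 0.72, 3.0]), train_X, train_y))   # -> lame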

Relevance:

20.00%

Publisher:

Abstract:

Achieving sustainable consumption patterns is a crucial step on the way towards sustainability. The scientific knowledge used to decide which priorities to set and how to enforce them has to converge with societal, political, and economic initiatives on various levels: from individual household decision-making to agreements and commitments in global policy processes. The aim of this thesis is to draw a comprehensive and systematic picture of sustainable consumption, and to do this it develops the concept of Strong Sustainable Consumption Governance. In this concept, consumption is understood as resource consumption. This includes consumption by industries, public consumption, and household consumption. In addition to the availability of resources (including the available sink capacity of the ecosystem) and their use and distribution among the Earth’s population, the thesis also considers their contribution to human well-being. This implies giving specific attention to the levels and patterns of consumption. Methods: The thesis introduces the terminology and various concepts of Sustainable Consumption and of Governance. It briefly elaborates on the methodology of Critical Realism and its potential for analysing Sustainable Consumption. It describes the various methods on which the research is based and sets out the political implications a governance approach towards Strong Sustainable Consumption may have. Two models are developed: one for the assessment of the environmental relevance of consumption activities, another to identify the influences of globalisation on the determinants of consumption opportunities. Results: One of the major challenges for Strong Sustainable Consumption is that it is not in line with the current political mainstream: that is, the belief that economic growth can cure all our problems. The proponents therefore have to battle against a strong headwind. Their motivation, however, is the conviction that there is no alternative. Efforts have to be made on multiple levels by multiple actors, and all of them are needed, as they constitute the individual strings that together make up the rope. However, everyone must ensure that they are pulling in the same direction. It might be useful to apply a carrot-and-stick strategy to stimulate public debate. The stick in this case is to create a sense of urgency. The carrot would be to better articulate to the public the message that a shrinking of the economy is not as much of a disaster as mainstream economics tends to suggest. In parallel to this, it is necessary to demand that governments take responsibility for governance. The dominant strategy is still information provision, but there is ample evidence that hard policies such as regulatory and economic instruments are the most effective. As for Civil Society Organizations, it is recommended that they overcome the habit of promoting Sustainable (in fact, green) Consumption through marketing strategies and instead foster public debate on values and well-being. This includes appreciating the potential of social innovation. Countless such initiatives are under way, but their potential is still insufficiently explored. Beyond the question of how to multiply such approaches, it is also necessary to establish political macro-structures to foster them.