25 results for Physics and Astronomy (all)
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
Quality inspection and assurance is a very important step when today's products are sold to markets. As products are produced in vast quantities, the interest in automating quality inspection tasks has increased correspondingly. Quality inspection tasks usually require the detection of deficiencies, defined as irregularities in this thesis. Objects containing regular patterns appear quite frequently in certain industries and sciences, e.g. half-tone raster patterns in the printing industry, crystal lattice structures in solid state physics, and solder joints and components in the electronics industry. In this thesis, the problem of regular patterns and irregularities is described in analytical form and three different detection methods are proposed. All the methods are based on the ability of the Fourier transform to represent regular information compactly. The Fourier transform enables the separation of the regular and irregular parts of an image, but the three methods presented are shown to differ in generality and computational complexity. The need to detect fine and sparse details is common in quality inspection tasks, e.g., locating small fractures in components in the electronics industry or detecting tearing in paper samples in the printing industry. In this thesis, a general definition of such details is given by defining sufficient statistical properties in the histogram domain. The analytical definition allows a quantitative comparison of methods designed for detail detection. Based on the definition, the utilisation of existing thresholding methods is shown to be well motivated. A comparison of thresholding methods shows that minimum error thresholding outperforms the other standard methods. The results are successfully applied to a paper printability and runnability inspection setup. Missing dots in a repeating raster pattern are detected from Heliotest strips and small surface defects from IGT picking papers.
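The Fourier-domain separation of regular and irregular image content described above can be illustrated with a short sketch. The code below is only a generic illustration of the principle, assuming NumPy; the function name and the keep_fraction parameter are hypothetical, and it does not reproduce the three detection methods developed in the thesis.

```python
# A minimal sketch of the general idea: a regular pattern concentrates its
# energy in a few Fourier peaks, so suppressing everything except those peaks,
# reconstructing the pattern, and subtracting it from the image leaves mainly
# the irregularities, which can then be thresholded.
import numpy as np

def irregularity_map(image: np.ndarray, keep_fraction: float = 0.02) -> np.ndarray:
    """Return a map highlighting deviations from the dominant regular pattern.

    keep_fraction is an illustrative parameter: the fraction of the
    highest-magnitude Fourier coefficients treated as the 'regular' part.
    """
    spectrum = np.fft.fft2(image.astype(float))
    magnitude = np.abs(spectrum)
    # Threshold separating the strongest peaks (regular pattern) from the rest.
    cutoff = np.quantile(magnitude, 1.0 - keep_fraction)
    regular_part = np.where(magnitude >= cutoff, spectrum, 0.0)
    # Reconstruct the regular pattern and subtract it from the original image.
    reconstruction = np.real(np.fft.ifft2(regular_part))
    return np.abs(image - reconstruction)

# The residual map would then be thresholded (e.g. with minimum error
# thresholding) to flag candidate defects such as missing raster dots.
```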
Abstract:
The aim of this study is to analyse the content of the interdisciplinary conversations in Göttingen between 1949 and 1961. The task is to compare models for describing reality presented by quantum physicists and theologians. Descriptions of reality in different disciplines are conditioned by the development of the concept of reality in philosophy, physics and theology. Our basic problem is stated in the question: How is it possible for the intramental image to match the external object? Cartesian knowledge presupposes clear and distinct ideas in the mind prior to observation, resulting in a true correspondence between the observed object and the cogitative observing subject. The Kantian synthesis between rationalism and empiricism emphasises an extended character of representation. The human mind is not a passive receiver of external information, but actively construes intramental representations of external reality in the epistemological process. Heidegger's aim was to reach a more primordial mode of understanding reality than what is possible in the Cartesian Subject-Object distinction. In Heidegger's philosophy, ontology as being-in-the-world is prior to knowledge concerning being. Ontology can be grasped only in the totality of being (Dasein), not only as an object of reflection and perception. According to Bohr, quantum mechanics introduces an irreducible loss in representation, which classically understood is a deficiency in knowledge. The conflicting aspects (particle and wave pictures) in our comprehension of physical reality cannot be completely accommodated into an entire and coherent model of reality. What Bohr rejects is not realism, but the classical Einsteinian version of it. By the use of complementary descriptions, Bohr tries to save a fundamentally realistic position. The fundamental question in Barthian theology is the problem of God as an object of theological discourse. Dialectics is Barth's way to express knowledge of God while avoiding a speculative theology and a human-centred religious self-consciousness. In Barthian theology, the human capacity for knowledge, independently of revelation, is insufficient to comprehend the being of God. Our knowledge of God is real knowledge in revelation and our words are made to correspond with the divine reality in an analogy of faith. The point of the Bultmannian demythologising programme was to claim the real existence of God beyond our faculties. We cannot simply define God as a human ideal of existence or a focus of values. The theological programme of Bultmann emphasised the notion that we can talk meaningfully of God only insofar as we have existential experience of his intervention. Common to all these twentieth-century philosophical, physical and theological positions is a form of anti-Cartesianism. Consequently, in regard to their epistemology, they can be labelled antirealist. This common insight also made it possible to find a common meeting point between the different disciplines. In this study, the different standpoints from all three areas and the conversations in Göttingen are analysed in the framework of realism/antirealism. One of the first tasks in the Göttingen conversations was to analyse the nature of the likeness between the complementary structures in quantum physics introduced by Niels Bohr and the dialectical forms in the Barthian doctrine of God.
The reaction against epistemological Cartesianism, metaphysics of substance and deterministic description of reality was the common point of departure for theologians and physicists in the Göttingen discussions. In his complementarity, Bohr anticipated the crossing of traditional epistemic boundaries and the generalisation of epistemological strategies by introducing interpretative procedures across various disciplines.
Abstract:
The aim of this thesis is to examine the weak-form efficiency of the Russian, Slovak, Czech, Romanian, Bulgarian, Hungarian and Polish stock markets. The thesis is a quantitative study, and daily index closing values were collected from the Datastream database. The data cover the period from each exchange's first trading day up to the end of August 2006. To sharpen the analysis, the data were examined over the full period as well as over two sub-periods. Stock market efficiency was tested with four statistical methods, including an autocorrelation test and the nonparametric runs test. A further aim is to determine whether a day-of-the-week anomaly exists in these markets. The presence of the day-of-the-week anomaly is examined using the ordinary least squares (OLS) method. The day-of-the-week anomaly is found in all of the above markets except the Czech market. Significant positive or negative autocorrelation is found in all of the stock markets, and the Ljung-Box test also indicates inefficiency in all markets over the full period. Based on the runs test, the random walk hypothesis is rejected for all stock markets except Slovakia, at least over the full period and the first sub-period. Moreover, the data are not normally distributed for any index or period. These findings indicate that the markets in question are not weak-form efficient.
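The nonparametric runs test named above has a standard closed form that a short sketch can illustrate. The following is a minimal, generic Wald-Wolfowitz runs test on a daily return series, assuming NumPy and SciPy; it is not the exact test configuration or data of the thesis.

```python
# Minimal sketch of a Wald-Wolfowitz runs test for randomness of daily returns.
# Under the random walk hypothesis the number of runs of same-signed returns
# should be close to its expectation; a large |z| rejects randomness.
import numpy as np
from scipy.stats import norm

def runs_test(returns: np.ndarray) -> tuple[float, float]:
    signs = returns > np.median(returns)             # split observations into two groups
    n1 = int(signs.sum())
    n2 = int(len(signs) - n1)
    runs = 1 + int(np.sum(signs[1:] != signs[:-1]))  # number of runs = sign changes + 1
    expected = 2.0 * n1 * n2 / (n1 + n2) + 1.0
    variance = (2.0 * n1 * n2 * (2.0 * n1 * n2 - n1 - n2)
                / ((n1 + n2) ** 2 * (n1 + n2 - 1)))
    z = (runs - expected) / np.sqrt(variance)
    p_value = 2.0 * (1.0 - norm.cdf(abs(z)))         # two-sided p-value
    return z, p_value
```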
Abstract:
The objective of my thesis is to assess mechanisms of ecological community control in macroalgal communities in the Baltic Sea. In the top-down model, predatory fish feed on invertebrate mesograzers, releasing algae partly from grazing pressure. Such a reciprocal relationship is called a trophic cascade. In the bottom-up model, nutrients increase biomass in the food chain. The nutrients are first assimilated by algae and, via the food chain, also increase the abundance of grazers and predators. Previous studies on oceanic shores have described these two regulative mechanisms in the grazer-alga link, but how they interact in the trophic cascades from fish to algae is still inadequately known. Because the top-down and bottom-up mechanisms are predicted to depend on environmental disturbances, such as wave stress and light, I have studied these models at two distinct water depths. The thesis is based on five factorial field experiments, all conducted in the Finnish Archipelago Sea. In all the experiments, I studied macroalgal colonization - either density, filament length or biomass - on submerged colonization substrates. By excluding predatory fish and mesograzers from the algal communities, the studies compared the strength of the top-down control with that in natural algal communities. A part of the experimental units was, in addition, exposed to enriched nitrogen and phosphorus concentrations, which enabled testing of bottom-up control. These two models of community control were further investigated in shallow (<1 m) and deep (ca. 3 m) water. Moreover, the control mechanisms were also expected to depend on grazer species. Therefore, different grazer species were enclosed in experimental units and their impacts on macroalgal communities were followed separately. The community control on Baltic rocky shores was found to follow theoretical predictions, which have not been confirmed by field studies before. Predatory fish limited the grazing impact, which was seen as denser algal communities and longer algal filaments. Nutrient enrichment increased the density and filament length of annual algae and thus changed the species composition of the algal community. The perennial alga Fucus vesiculosus and the red alga Ceramium tenuicorne suffered from the increased nutrient availability. The enriched nutrient conditions led to denser grazer fauna, thereby causing strong top-down control over both the annual and perennial macroalgae. The strength of the top-down control seemed to depend on the density and diversity of grazers and predators as well as on the species composition of the macroalgal assemblages. However, the nutrient enrichment weakened the limiting impact of predatory fish on the grazer fauna, because fish stocks did not respond as quickly to enhanced resources in the environment as the invertebrate fauna. According to the environmental stress model, environmental disturbances weaken top-down control. For example, on a wave-exposed shore, wave stress causes more stress to animals close to the surface than deeper on the shore. Mesograzers were efficient consumers at both depths, while predation by fish was weaker in shallow water. Thus, the results supported the environmental stress model, which predicts that environmental disturbance affects a species the more strongly the higher it is in the food chain. This thesis assessed the mechanisms of community control in three-level food chains and did not take into account higher predators.
Such predators in the Baltic Sea are, for example, cormorants, seals, the white-tailed sea eagle, cod and salmon. All these predatory species have recently been, or are currently, under intensive fishing, hunting and persecution, and their stocks have only recently increased in the region. Therefore, it is possible that future densities of top predators may yet alter the strengths of the controlling mechanisms in the Baltic littoral zone.
Abstract:
The purpose of this study was to investigate the effects of information and communication technology (ICT) on school from teachers’ and students’ perspectives. The focus was on three main subject matters: on ICT use and competence, on teacher and school community, and on learning environment and teaching practices. The study is closely connected to the national educational policy which has aimed strongly at supporting the implementation of ICT in pedagogical practices at all institutional levels. The phenomena were investigated using a mixed methods approach. The qualitative data from three case studies and the quantitative data from three statistical studies were combined. In this study, mixed methods were used to investigate the complex phenomena from various stakeholders’ points of view, and to support validation by combining different perspectives in order to give a fuller and more complete picture of the phenomena. The data were used in a complementary manner. The results indicate that the technical resources for using ICT both at school and at home are very good. In general, students are capable and motivated users of new technology; these skills and attitudes are mainly based on home resources and leisure-time use. Students have the skills to use new kinds of applications and new forms of technology, and their ICT skills are broad, although not necessarily adequate; their working habits might be ineffective and even wrong. Some students have a special kind of ICT-related adaptive expertise which develops in a beneficial interaction between school guidance and challenges, and individual interest and activity. Teachers’ skills are more heterogeneous. The large majority of teachers have sufficient skills for everyday and routine working practices, but many of them still have difficulties in finding a meaningful pedagogical use for technology. The intensive case study indicated that for the majority of teachers the intensive ICT projects offer a possibility for learning new skills and competences intertwined with their work, often also supported by external experts and a collaborative teacher community; a possibility that “ordinary” teachers usually do not have. Further, teachers’ good ICT competence helps them to adopt new pedagogical practices and integrate ICT in a meaningful way. The genders differ in their use of and skills in ICT: males show better skills especially in purely technical issues, also in schools and classrooms, whereas female students and younger female teachers use ICT quite naturally in their ordinary practices. With time, the technology has become less technical and its communication and creation affordances have become stronger, easier to use, more popular and motivating, all of which has increased female interest in the technology. There is a generation gap in ICT use and competence between teachers and students. This is apparent especially in the ICT-related pedagogical practices in the majority of schools. The new digital affordances not only replace some previous practices; the new functionalities change many of our existing conceptions, values, attitudes and practices. The very different conceptions that generations have about technology lead, in the worst case, to a digital gap in education; the technology used in school is boring and ineffective compared to the ICT use outside school, and it does not provide the competence needed for using advanced technology in learning.
The results indicate that in schools which have special ICT projects (“ICT pilot schools”) for improving pedagogy, these have led to true changes in teaching practices. Many teachers adopted student-centred and collaborative, inquiry-oriented teaching practices as well as practices that supported students' authentic activities, independent work, knowledge building, and students' responsibility. This is, indeed, strongly dependent on the ICT-related pedagogical competence of the teacher. However, the daily practices of some teachers still reflected a rather traditional teacher-centred approach. As a matter of fact, very few teachers ever applied solely, e.g., the knowledge-building approach; teachers used various approaches or mixed them, based on the situation, teaching and learning goals, and on their pedagogical and technical competence. In general, changes towards pedagogical improvements even in well-organised developmental projects are slow. As a result, there are two kinds of ICT stories: successful “ICT pilot schools” with pedagogical innovations related to ICT and with school-community-level agreement about the visions and aims, and “ordinary schools”, which have no particular interest in or external support for using ICT for improvement, and in which ICT is used in a more routine way, and as a tool for individual teachers, not for the school community.
Abstract:
Aims: This study was carried out to investigate the role of common liver function tests and the degree of common bile duct dilatation in the differential diagnosis of extrahepatic cholestasis, as well as the occurrence, diagnosis and treatment of iatrogenic bile duct injuries. In bile duct injuries, special attention was paid to gender and severity distribution and to long-term results. Patients and methods: The study included all consecutive patients with common bile duct stones or malignant strictures diagnosed in ERCP between August 2000 and November 2003. Common liver function tests were measured in the morning before ERCP in all of these 212 patients, and their common bile duct diameter was measured from ERCP films. Between January 1995 and April 2002, 3736 laparoscopic cholecystectomies were performed and a total of 32 bile duct injuries were diagnosed. All pre-, per-, and postoperative data were collected retrospectively, and the patients were also interviewed by phone. Results: Plasma bilirubin proved to be the best discriminator between CBD stones and malignant strictures (p≤0.001 compared to the other liver function tests and the degree of common bile duct dilatation). The same effect was seen in Receiver Operating Characteristic curves (AUC 0.867). With a plasma bilirubin cut-off value of 145 μmol/l, four out of five patients could be classified correctly. The degree of common bile duct dilatation proved to be worthless in differential diagnostics. After laparoscopic cholecystectomy the total risk for bile duct injury was 0.86%, including cystic duct leaks. 86% of severe injuries and 88% of injuries requiring operative treatment were diagnosed in females. All the cystic duct leakages and 87% of the strictures were treated endoscopically. Good long-term results were seen in 84% of the whole study population. Conclusions: Plasma bilirubin is the most effective liver function test in the differential diagnosis between CBD stones and malignant strictures. The only value of common bile duct dilatation is its ability to verify the presence of extrahepatic cholestasis. Female gender was associated with a higher number of iatrogenic bile duct injuries and, in particular, most of the major complications occurred in females. Most of the cystic duct leaks and common bile duct strictures can be treated endoscopically. The long-term results in our institution are at an internationally acceptable level.
Abstract:
Chlorambucil is an anticancer agent used in the treatment of a variety of cancers, especially chronic lymphocytic leukemia, and of autoimmune diseases. Nevertheless, chlorambucil is potentially mutagenic, teratogenic and carcinogenic. The high antitumor activity and high toxicity of chlorambucil and its main metabolite, phenylacetic acid mustard, to normal tissues have been known for a long time. Despite this, no detailed chemical data on their reactions with biomolecules in aqueous media have been available. The aim of the work described in this thesis was to analyze the reactions of chlorambucil with 2’-deoxyribonucleosides and calf thymus DNA in aqueous buffered solution, at physiological pH, and to identify and characterize all adducts by using modern analytical methods. Our research also focused on the reactions of phenylacetic acid mustard with 2’-deoxynucleosides under similar conditions. A review of the literature, consisting of a general background on nucleic acids, alkylating agents and the ultraviolet spectroscopy used to identify purine and pyrimidine nucleosides, as well as the results from the experimental work, is presented and discussed in this doctoral thesis.
Abstract:
The purpose of this study was to evaluate the effect of the birth hospital and the time of birth on mortality and the long-term outcome of Finnish very low birth weight (VLBW) or very low gestational age (VLGA) infants. This study included all Finnish VLBW/VLGA infants born at <32 gestational weeks or with a birth weight of ≤1500 g, and controls born full-term and healthy. In the first part of the study, the mortality of VLBW/VLGA infants born in 2000–2003 was studied. The second part of the study consisted of a five-year follow-up of VLBW/VLGA infants born in 2001–2002. The study was performed using data from parental questionnaires and several registers. The one-year mortality rate was 11% for live-born VLBW/VLGA infants, 22% for live-born and stillborn VLBW/VLGA infants, and 0% for the controls. In live-born and in all (including stillbirths) VLBW/VLGA infants, the adjusted mortality was lower among those born in level III hospitals compared with level II hospitals. Mortality rates of live-born VLBW/VLGA infants differed according to the university hospital district where the birth hospital was located, but there were no differences in mortality between the districts when stillborn infants were included. There was a trend towards lower mortality rates in VLBW/VLGA infants born during office hours compared with those born outside office hours (night time, weekends, and public holidays). When stillborn infants were included, this difference according to the time of birth was significant. Among five-year-old VLBW/VLGA children, morbidity, use of health care resources, and problems in behaviour and development were more common in comparison with the controls. The health-related quality of life of the surviving VLBW/VLGA children was good but, statistically, it was significantly lower than among the controls. The median and the mean number of quality-adjusted life-years were 4.6 and 3.6 out of a maximum of five years for all VLBW/VLGA children. For the controls, the median was 4.8 and the mean was 4.9. Morbidity rates, the use of health care resources, and the mean quality-adjusted life-years differed for VLBW/VLGA children according to the university hospital district of birth. However, the time of birth, the birth hospital level or the university hospital district was not associated with the health-related quality of life, nor with the behavioural and developmental scores of the survivors at the age of five years. In conclusion, the decreased mortality in level III hospitals was not gained at the expense of long-term problems. The results indicate that VLBW/VLGA deliveries should be centralized to level III hospitals and that the regional differences in treatment practices should be further clarified. A long-term follow-up of the outcome of VLBW/VLGA infants is important in order to recognize the critical periods of care and to optimise the care. In the future, quality-adjusted life-years can be used as a uniform measure for comparing the effectiveness of care between VLBW/VLGA infants and different patient groups.
Abstract:
The goal of the study was to analyse orthodontic care in Finnish health centres with special reference to the delivery, outcome and costs of treatment. Public orthodontic care was studied by two questionnaires sent to the chief dental officers of all health centres (n = 276) and to all specialist orthodontists in Finland (n = 146). The large regional variation was mentioned by the orthodontists as the most important factor requiring improvement. Orthodontic practices and outcome were studied in eight Finnish municipal health centres representing early and late timing of treatment. A random sample of 16- and 18-year-olds (n = 1109) living in these municipalities was examined for acceptability of occlusion with the Occlusal Morphology and Function Index (OMFI). In acceptability of occlusion, only minor differences were found between the two timing groups. The percentage of subjects with acceptable morphology was higher among untreated than among treated adolescents. The costs of orthodontic care were estimated among the adolescents with a treatment history. The mean appliance costs were higher in the late, and the mean visit costs higher in the early timing group. The cost-effectiveness of orthodontic services differed among the health centres, but was almost equal in the two timing groups. National guidelines and delegation of orthodontic tasks were suggested as the tools for reducing the variation among the health centres. In the eight health centres, considerable variation was found in acceptability of occlusion and in cost-effectiveness of services. The cost-effectiveness was not directly connected with the timing of treatment.
Abstract:
Crystallization is a purification method used to obtain a crystalline product of a certain crystal size. It is one of the oldest industrial unit processes and is commonly used in modern industry due to its good purification capability from rather impure solutions with reasonably low energy consumption. However, the process is extremely challenging to model and control because it involves inhomogeneous mixing and many simultaneous phenomena such as nucleation, crystal growth and agglomeration. All these phenomena are dependent on supersaturation, i.e. the difference between the actual liquid phase concentration and the solubility. Homogeneous mass and heat transfer in the crystallizer would greatly simplify the modelling and control of crystallization processes; such conditions are, however, not the reality, especially in industrial-scale processes. Consequently, the hydrodynamics of crystallizers, i.e. the combination of mixing, feed and product removal flows, and recycling of the suspension, needs to be thoroughly investigated. Understanding of hydrodynamics is important in crystallization, especially in larger-scale equipment where uniform flow conditions are difficult to attain. It is also important to understand the different size scales of mixing: micro-, meso- and macromixing. Fast processes, like nucleation and chemical reactions, are typically highly dependent on micro- and mesomixing, but macromixing, which equalizes the concentrations of all the species within the entire crystallizer, cannot be disregarded. This study investigates the influence of hydrodynamics on crystallization processes. Modelling of crystallizers with the mixed suspension mixed product removal (MSMPR) theory (ideal mixing), computational fluid dynamics (CFD), and a compartmental multiblock model is compared. The importance of proper verification of the CFD and multiblock models is demonstrated. In addition, the influence of different hydrodynamic conditions on reactive crystallization process control is studied. Finally, the effect of extreme local supersaturation is studied using power ultrasound to initiate nucleation. The present work shows that mixing and chemical feeding conditions clearly affect induction time and cluster formation, nucleation, growth kinetics, and agglomeration. Consequently, the properties of crystalline end products, e.g. crystal size and crystal habit, can be influenced by management of the mixing and feeding conditions. Impurities may have varying impacts on crystallization processes. As an example, manganese ions were shown to replace magnesium ions in the crystal lattice of magnesium sulphate heptahydrate, increasing the crystal growth rate significantly, whereas sodium ions showed no interaction at all. Modelling of continuous crystallization based on MSMPR theory showed that the model is feasible in a small laboratory-scale crystallizer, whereas in larger pilot- and industrial-scale crystallizers, hydrodynamic effects should be taken into account. For that reason, CFD and multiblock modelling are shown to be effective tools for modelling crystallization with inhomogeneous mixing. The present work also shows that the selection of the measurement point, or points in the case of multiprobe systems, is crucial when process analytical technology (PAT) is used to control larger-scale crystallization. The thesis concludes by describing how control of local supersaturation by highly localized ultrasound was successfully applied to induce nucleation and to control polymorphism in the reactive crystallization of L-glutamic acid.
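The MSMPR (ideal mixing) description referred to above has a well-known closed-form result for the steady-state crystal size distribution, n(L) = n0*exp(-L/(G*tau)), which a short sketch can illustrate. The code below uses purely illustrative parameter values and does not reproduce the modelling work of the thesis.

```python
# Minimal sketch of the steady-state MSMPR population balance with
# size-independent growth: n(L) = n0 * exp(-L / (G * tau)).
import numpy as np

G = 1.0e-8          # illustrative crystal growth rate, m/s
tau = 1800.0        # illustrative mean residence time, s
n0 = 1.0e12         # illustrative nuclei population density, #/(m^3 * m)

L = np.linspace(0.0, 500e-6, 1000)      # crystal size axis, m
n = n0 * np.exp(-L / (G * tau))         # population density distribution

# Moments of the distribution give practically useful quantities, e.g. the
# mass-weighted (dominant) size of an ideal MSMPR product is 4 * G * tau.
m3 = np.trapz(n * L**3, L)
m4 = np.trapz(n * L**4, L)
print("dominant size from moments:", m4 / m3, "m  (theory:", 4 * G * tau, "m)")
```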
Abstract:
Female sexual dysfunctions, including desire, arousal, orgasm and pain problems, have been shown to be highly prevalent among women around the world. The etiology of these dysfunctions is unclear, but associations with health, age, psychological problems, and relationship factors have been identified. Genetic effects explain individual variation in orgasm function to some extent, but until now quantitative behavior genetic analyses have not been applied to other sexual functions. In addition, behavior genetics can be applied to exploring the cause of any observed comorbidity between the dysfunctions. Discovering more about the etiology of the dysfunctions may further improve the classification systems which are currently under intense debate. The aims of the present thesis were to evaluate the psychometric properties of a Finnish-language version of a commonly used questionnaire for measuring female sexual function, the Female Sexual Function Index (FSFI), in order to investigate prevalence, comorbidity, and classification, and to explore the balance of genetic and environmental factors in the etiology as well as the associations of a number of biopsychosocial factors with female sexual functions. Female sexual functions were studied through survey methods in a population-based sample of Finnish twins and their female siblings. There were two waves of data collection. The first data collection targeted 5,000 female twins aged 33–43 years and the second 7,680 female twins aged 18–33 and their over 18-year-old female siblings (n = 3,983). There was no overlap between the data collections. The combined overall response rate for both data collections was 53% (n = 8,868), with a better response rate in the second (57%) compared to the first (45%). In order to measure female sexual function, the FSFI was used. It includes 19 items which measure female sexual function during the previous four weeks in six subdomains: desire, subjective arousal, lubrication, orgasm, sexual satisfaction, and pain. In line with earlier research in clinical populations, a six-factor solution of the Finnish-language version of the FSFI received support. The internal consistencies of the scales were good to excellent. Some questions were raised about how to avoid overestimating the prevalence of extreme dysfunctions due to women being allocated a score of zero if they had had no sexual activity during the preceding four weeks. The prevalence of female sexual dysfunctions per se ranged from 11% for lubrication dysfunction to 55% for desire dysfunction. The prevalence rates for sexual dysfunction with concomitant sexual distress, in other words sexual disorders, were notably lower, ranging from 7% for lubrication disorder to 23% for desire disorder. The comorbidity between the dysfunctions was substantial, most notably between arousal and lubrication dysfunction, even if these two dysfunctions showed distinct patterns of associations with the other dysfunctions. Genetic influences on individual variation in the six subdomains of the FSFI were modest but significant, ranging from 3–11% for additive genetic effects and 5–18% for nonadditive genetic effects. The rest of the variation in sexual functions was explained by nonshared environmental influences. A correlated factor model, including additive and nonadditive genetic effects and nonshared environmental effects, had the best fit. All in all, every correlation between the genetic factors was significant except between lubrication and pain.
All correlations between the nonshared environment factors were significant, showing that there is a substantial overlap in genetic and nonshared environmental influences between the dysfunctions. In general, psychological problems, poor satisfaction with the relationship, sexual distress, and poor partner compatibility were associated with more sexual dysfunctions. Age was confounded with relationship length but had, over and above relationship length, a negative effect on desire and sexual satisfaction and a positive effect on orgasm and pain functions. Alcohol consumption in general was associated with better desire, arousal, lubrication, and orgasm function. Women pregnant with their first child had fewer pain problems than nulliparous nonpregnant women. Multiparous pregnant women had more orgasm problems compared to multiparous nonpregnant women. Having children was associated with fewer orgasm and pain problems. The conclusions were that desire, subjective arousal, lubrication, orgasm, sexual satisfaction, and pain are separate entities that have distinct associations with a number of different biopsychosocial factors. However, there is also considerable comorbidity between the dysfunctions, which is explained by overlap in additive genetic, nonadditive genetic and nonshared environmental influences. Sexual dysfunctions are highly prevalent and are not always associated with sexual distress, and this relationship might be moderated by a good relationship and compatibility with the partner. Regarding classification, the results support separate diagnoses for subjective arousal and genital arousal as well as the inclusion of pain under sexual dysfunctions.
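The additive/nonadditive genetic decomposition mentioned above comes from the classical twin design, in which monozygotic (MZ) and dizygotic (DZ) twin correlations are compared. The sketch below is a highly simplified, closed-form ADE illustration with hypothetical correlations; analyses of this kind are in practice fitted with structural equation models rather than this shortcut, and the numbers shown are not results from the study.

```python
# Illustrative closed-form ADE decomposition from twin correlations
# (rMZ = a^2 + d^2, rDZ = 0.5*a^2 + 0.25*d^2, e^2 = 1 - rMZ).
def ade_decomposition(r_mz: float, r_dz: float) -> dict[str, float]:
    a2 = 4.0 * r_dz - r_mz          # additive genetic variance share
    d2 = 2.0 * r_mz - 4.0 * r_dz    # nonadditive (dominance) genetic share
    e2 = 1.0 - r_mz                 # nonshared environment share
    return {"A": a2, "D": d2, "E": e2}

# Hypothetical correlations, only for illustration:
print(ade_decomposition(r_mz=0.25, r_dz=0.08))   # -> {'A': 0.07, 'D': 0.18, 'E': 0.75}
```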
Abstract:
The flow of information within the modern information society has increased rapidly over the last decade. The major part of this information flow relies on the individual’s abilities to handle text or speech input. For the majority of us it presents no problems, but there are some individuals who would benefit from other means of conveying information, e.g. a signed information flow. During the last decades, new results from various disciplines have all pointed towards a common background and processing for sign and speech, and this was one of the key issues that I wanted to investigate further in this thesis. The basis of this thesis is firmly within speech research, and that is why I wanted to design test batteries for signers analogous to widely used speech perception tests – to find out whether the results for signers would be the same as in speakers’ perception tests. One of the key findings within biology – and more precisely its effects on speech and communication research – is the mirror neuron system. That finding has enabled us to form new theories about the evolution of communication, and it all seems to converge on the hypothesis that all communication has a common core within humans. In this thesis, speech and sign are discussed as equal and analogical counterparts of communication, and all research methods used in speech are modified for sign. Both speech and sign are thus investigated using similar test batteries. Furthermore, both production and perception of speech and sign are studied separately. An additional framework for studying production is given by gesture research using cry sounds. Results of cry sound research are then compared to results from children acquiring sign language. These results show that individuality manifests itself from very early on in human development. Articulation in adults, both in speech and sign, is studied from two perspectives: normal production and re-learning production when the apparatus has been changed. Normal production is studied both in speech and sign, and the effects of changed articulation are studied with regard to speech. Both these studies are done by using carrier sentences. Furthermore, sign production is studied by giving the informants the possibility for spontaneous production. The production data from the signing informants is also used as the basis for input in the sign synthesis stimuli used in the sign perception test battery. Speech and sign perception were studied using the informants’ answers to questions using forced choice in identification and discrimination tasks. These answers were then compared across language modalities. Three different informant groups participated in the sign perception tests: native signers, sign language interpreters and Finnish adults with no knowledge of any signed language. This gave a chance to investigate which of the characteristics found in the results were due to the language per se and which were due to the change in modality itself. As the analogous test batteries yielded similar results over different informant groups, some common threads of results could be observed. Starting from very early on in acquiring speech and sign, the results were highly individual. However, the results were the same within one individual when the same test was repeated. This individuality of results appeared along the same patterns across different language modalities and, on some occasions, across language groups.
As both modalities yield similar answers to analogous study questions, this has led us to provide methods for basic input for sign language applications, i.e. signing avatars. This has also given us answers to questions on the precision of the animation and intelligibility for the users – what are the parameters that govern the intelligibility of synthesised speech or sign, and how precise must the animation or synthetic speech be in order for it to be intelligible. The results also give additional support to the well-known fact that intelligibility is in fact not the same as naturalness. In some cases, as shown within the sign perception test battery design, naturalness decreases intelligibility. This also has to be taken into consideration when designing applications. All in all, results from each of the test batteries, be they for signers or speakers, yield strikingly similar patterns, which would indicate yet further support for the common core for all human communication. Thus, we can modify and deepen the phonetic framework models for human communication based on the knowledge obtained from the results of the test batteries within this thesis.
Abstract:
For the past two decades, music digitalization has been considered the most significant phenomenon in the music industry, as physical sales have been decreasing rapidly. The advancement of digital technology and the internet have facilitated the digitalization of the music industry and affected all stages of the music value chain, namely music creation, distribution and consumption. The newly created consumer culture has led to the establishment of novel business models such as music subscriptions, à-la-carte download websites and live streaming. The dynamic digital environment has presented the music industry stakeholders with the challenge of adapting to the constantly changing needs and demands of modern consumers. The purpose of this study was to identify how music digitalization can influence change in the Finnish music industry value chain; i.e. how digitalization affects the music industry stakeholders, their functions and inter-relatedness, and how the stakeholders are able to react to the changes in the industry. The study was conducted as a qualitative research project based entirely on primary data in the form of semi-structured interviews with experts from different units of the Finnish music industry value chain. Since the study offers an assessment of diverse viewpoints on the value chain, it further provides an integrated picture of the current situation of the Finnish music industry and its competitive environment. The results suggest that the music industry is currently in a turbulent stage of experimentation with new business models and digital innovations. However, at this point it is impossible to determine which business model will be approved by consumers in the longer run. Nevertheless, the study confirmed the claim that consumption of music in its digital form is to become dominant over traditional physical copy sales in the near future. As a result, the music industry is becoming more user-oriented; that is, the focus is shifting from music production towards artist branding and management and visibility to the audience. Furthermore, the music industry is undergoing a process of integration with other industries such as media, social networks, internet service providers and mobile phone manufacturers in order to better fulfill the consumers’ needs. The previously underrated live music and merchandising are also increasing their significance for revenues in the stagnant music markets. Therefore, the music industry is at present developing towards becoming an integrated entertainment industry deeply penetrating every point of modern people’s leisure activities.
Abstract:
The purpose of the study is to evaluate the performance of active portfolio management and the effect of the stock market trend on that performance. The theory of efficient markets states that market prices reflect all available information and that all investors share a common view of future price developments. This view gives little room for the success of active management, but the theory has been disputed, at least as regards the level of efficiency. Behavioral finance has developed theories that identify irrational behavior patterns of investors. For example, investment decisions are not made independently of past market developments. These findings give reason to believe that the performance of active portfolio management may also depend on market developments. The performance of 16 Finnish equity funds is evaluated over the period 2005 to 2011. In addition, two sub-periods are constructed, a bull market period and a bear market period. The sub-periods are created by joining together the two bull market phases and the two bear market phases of the whole period. This allows for the comparison of the two different market states. The performance of the funds is measured with the risk-adjusted performance measure of Modigliani and Modigliani (1997), the abnormal return over the CAPM of Jensen (1968), and the market timing measure of Henriksson and Merton (1981). The results suggested that on average the funds are not able to outperform the market portfolio. However, the underperformance was on average found to be lower than the management fees, which suggests that portfolio managers are able to make successful investment decisions to some extent. The study revealed substantial dependence on the market trend for all of the measures. The risk-adjusted performance measure suggested that in bear markets active portfolio managers are on average able to beat the market portfolio, but not in bull markets. Jensen's alpha and the market timing model also showed striking differences between the two market states. The results of these two measures were, however, somewhat problematic and reliable conclusions about the performance could not be drawn.
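Two of the performance measures named above, Jensen's alpha and the Modigliani-Modigliani (M²) measure, have compact definitions that a short sketch can illustrate. The code below is a minimal, generic illustration on excess-return series, assuming NumPy; it is not the estimation setup of the study, and the function names are hypothetical.

```python
# Minimal sketch of two standard fund performance measures.
import numpy as np

def jensen_alpha(fund_excess: np.ndarray, market_excess: np.ndarray) -> tuple[float, float]:
    """OLS fit of R_p - R_f = alpha + beta * (R_m - R_f) + e; returns (alpha, beta)."""
    X = np.column_stack([np.ones_like(market_excess), market_excess])
    coef, *_ = np.linalg.lstsq(X, fund_excess, rcond=None)
    alpha, beta = coef
    return float(alpha), float(beta)

def m_squared(fund_excess: np.ndarray, market_excess: np.ndarray, rf_mean: float) -> float:
    """Modigliani-Modigliani measure: the fund's Sharpe ratio scaled to market volatility."""
    sharpe = fund_excess.mean() / fund_excess.std(ddof=1)
    return rf_mean + sharpe * market_excess.std(ddof=1)

# The Henriksson-Merton timing regression adds a down-market term:
# R_p - R_f = alpha + beta1*(R_m - R_f) + beta2*max(0, -(R_m - R_f)) + e.
```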
Abstract:
Few people see both the opportunities and the threats that IT legacy brings in the current world. On one hand, effective legacy management can bring substantial hard savings and a smooth transition to the desired future state. On the other hand, its mismanagement contributes to serious operational business risks, as old systems are not as reliable as the business users require. This thesis offers one perspective on dealing with IT legacy – through effective contract management, as a component towards achieving Procurement Excellence in IT, thus bridging IT delivery departments, IT procurement, business units, and suppliers. It developed a model for assessing the impact of improvements on the contract management process, and a set of tools and advice regarding analysis and improvement actions. The thesis conducted a case study to present and justify the implementation of Lean Six Sigma in an IT legacy contract management environment. Lean Six Sigma proved to be successful, and this thesis presents and discusses all the steps necessary, and the pitfalls to avoid, to achieve breakthrough improvement in IT contract management process performance. For the IT legacy contract management process, two improvements require special attention and can be easily copied to any organization. The first is the issue of diluted contract ownership, which stops all improvements, as people do not know who is responsible for performing the required actions. The second is the contract management performance evaluation tool, which can be used for monitoring and for identifying outlying contracts and opportunities for improvement in the process. The study resulted in valuable insight into the benefits of applying Lean Six Sigma to improve IT legacy contract management, as well as into how Lean Six Sigma can be applied in an IT environment. Managerial implications are discussed. It is concluded that the use of the data-driven Lean Six Sigma methodology for improving existing IT contract management processes is a significant addition to the existing best practices in contract management.