17 results for Idealized model for theory development

in Helda - Digital Repository of University of Helsinki


Relevance: 100.00%

Abstract:

Comprehensive two-dimensional gas chromatography (GC×GC) offers enhanced separation efficiency, reliability in qualitative and quantitative analysis, capability to detect low quantities, and information on the whole sample and its components. These features are essential in the analysis of complex samples, in which the number of compounds may be large or the analytes of interest are present at trace level. This study involved the development of instrumentation, data analysis programs and methodologies for GC×GC and their application in studies on qualitative and quantitative aspects of GC×GC analysis. Environmental samples were used as model samples. Instrumental development comprised the construction of three versions of a semi-rotating cryogenic modulator in which modulation was based on two-step cryogenic trapping with continuously flowing carbon dioxide as coolant. Two-step trapping was achieved by rotating, with a motor, the nozzle that sprayed the carbon dioxide. The fastest rotation and highest modulation frequency were achieved with a permanent magnetic motor, and modulation was most accurate when the motor was controlled with a microcontroller containing a quartz crystal. Heated wire resistors were unnecessary for the desorption step when liquid carbon dioxide was used as coolant. With use of the modulators developed in this study, the narrowest peaks were 75 ms at base. Three data analysis programs were developed allowing basic, comparison and identification operations. Basic operations enabled the visualisation of two-dimensional plots and the determination of retention times, peak heights and volumes. The overlaying feature in the comparison program allowed easy comparison of 2D plots. An automated identification procedure based on mass spectra and retention parameters allowed the qualitative analysis of data obtained by GC×GC and time-of-flight mass spectrometry. In the methodological development, sample preparation (extraction and clean-up) and GC×GC methods were developed for the analysis of atmospheric aerosol and sediment samples. Dynamic sonication assisted extraction was well suited for atmospheric aerosols collected on a filter. A clean-up procedure utilising normal-phase liquid chromatography with ultraviolet detection worked well in the removal of aliphatic hydrocarbons from a sediment extract. GC×GC with flame ionisation detection or quadrupole mass spectrometry provided good reliability in the qualitative analysis of target analytes. However, GC×GC with time-of-flight mass spectrometry was needed in the analysis of unknowns. The automated identification procedure that was developed was efficient in the analysis of large data files, but manual search and analyst knowledge are invaluable as well. Quantitative analysis was examined in terms of calibration procedures and the effect of matrix compounds on GC×GC separation. In addition to calibration in GC×GC with summed peak areas or peak volumes, simplified area calibration based on the normal GC signal can be used to quantify compounds in samples analysed by GC×GC so long as certain qualitative and quantitative prerequisites are met. In a study of the effect of matrix compounds on GC×GC separation, it was shown that the quality of the PAH separation is not significantly disturbed by the amount of matrix, and that quantitativeness suffers only slightly when matrix is present and the amount of target compounds is low. The benefits of GC×GC in the analysis of complex samples easily overcome some minor drawbacks of the technique.
The developed instrumentation and methodologies performed well for environmental samples, but they could also be applied to other complex samples.
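As a rough illustration of the summed-peak-area calibration mentioned above, the sketch below (Python; the function names and slice areas are hypothetical and this is not the program developed in the thesis) sums the modulated second-dimension peak areas belonging to one analyte, fits a straight-line calibration to standards, and inverts it to quantify a sample.

    import numpy as np

    def summed_peak_area(slice_areas):
        """Sum the areas of the modulated slices of one analyte.

        In GC×GC each compound elutes as several narrow second-dimension
        peaks; their summed area plays the role of a normal 1D-GC peak area.
        """
        return float(np.sum(slice_areas))

    def fit_linear_calibration(concentrations, summed_areas):
        """Least-squares fit of area = slope * concentration + intercept."""
        slope, intercept = np.polyfit(concentrations, summed_areas, deg=1)
        return slope, intercept

    def quantify(sample_slice_areas, slope, intercept):
        """Convert a sample's summed slice area back to a concentration."""
        area = summed_peak_area(sample_slice_areas)
        return (area - intercept) / slope

    # Hypothetical standards: concentration (ng/uL) -> modulated slice areas
    standards = {
        0.5: [120.0, 310.0, 95.0],
        1.0: [250.0, 620.0, 180.0],
        2.0: [495.0, 1240.0, 370.0],
    }
    conc = np.array(list(standards.keys()))
    areas = np.array([summed_peak_area(v) for v in standards.values()])
    slope, intercept = fit_linear_calibration(conc, areas)

    sample = [200.0, 500.0, 150.0]   # slice areas of the analyte in a sample
    print(f"estimated concentration: {quantify(sample, slope, intercept):.2f} ng/uL")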

Relevance: 100.00%

Abstract:

Whereas it has been widely assumed by the public that the Soviet music policy system had a "top-down" structure of control and command that directly affected musical creativity, my research shows that the relations between the different levels of the music policy system were in fact vague, and that the viewpoints of its representatives differed from each other. Because the representatives of the party and government organs controlling operas could not define what kind of music represented Socialist Realism, the system as it developed during the 1930s and 1940s did not function effectively enough to create such centralised control of Soviet music, still less could Soviet operas fulfil the highly ambiguous aesthetics of Socialist Realism. I show that musical discussions developed into ritualistic bureaucratic arenas, where it became more important to reveal heretical composers, make scapegoats of them and require them to perform self-criticism than to give directions on how to reach the artistic goals of Socialist Realism. When one opera was found to be unacceptable, this led to a strengthening of control by the party leadership, which in turn led to more operas, one after the other, being revealed as failures. I have studied the control of the composition, staging and reception of the opera case-studies, which remain obscure in the West despite a growing scholarly interest in them, and have created a detailed picture of the foundation and development of the Soviet music control system in 1932-1950. My detailed discussion of such case-studies as Ivan Dzerzhinskii's The Quiet Don, Dmitrii Shostakovich's Lady Macbeth of Mtsensk District, Vano Muradeli's The Great Friendship, Sergei Prokofiev's Story of a Real Man, Tikhon Khrennikov's Frol Skobeev and Evgenii Zhukovskii's From All One's Heart backs with documentary precision the historically revisionist model of the development of Soviet music. In February 1948, composers belonging to the elite of the Union of Soviet Composers, e.g. Dmitri Shostakovich and Sergei Prokofiev, were accused in a Central Committee Resolution of formalism, that is, of being under the influence of Western modernism. Accusations of formalism were connected to the criticism of the considerable financial, material and social privileges these composers enjoyed in the leadership of the Union. With my new archival findings I give a more detailed picture of the financial background of the 1948 campaign. The independent position of the music funding organization of the Union of Soviet Composers (Muzfond) in deciding on its finances was an exceptional phenomenon in the Soviet Union and contradicted the strivings to strengthen the control of Soviet music. The financial audits of the Union of Soviet Composers did not, however, change the elite status of some of its composers, except perhaps for a short period in some cases. At the same time, the significant financial independence of Soviet theatres was restricted. The cuts in the governmental funding allocated to Soviet theatres contradicted the intensified ideological demands for Soviet operas.

Relevance: 100.00%

Abstract:

The thesis is positioned in the services marketing field. Previous mobile service research has identified perceived value or relative advantage as a stable predictor of the use of services. However, a more detailed view of what customers value in mobile services is needed for marketing diverse types of mobile content and attracting committed customers. The direct relationships between multidimensional value and loyalty constructs have received limited attention in the previous literature, although a multidimensional view is needed for differentiating services. This thesis studies how the perceived value of mobile service use affects customer commitment, repurchase intentions, word-of-mouth and willingness to pay. The doctoral thesis consists of three journal articles and one working paper. The four papers have different sub-aims and comprise individual empirical studies. Mixed methods are used, including both personal interviews and survey data collected from end-users of different types of mobile content services. The conceptual mobile perceived value model that results from the first explorative empirical study supports a six-dimensional value view. The six dimensions are further categorized into two higher-order constructs: content-related perceived value (emotional, social, convenience and monetary value) and context-related perceived value (epistemic and conditional value). Structural equation modeling is used in the other three studies to validate this framework by analyzing the relationships between context- and content-related value, and how the individual perceived value dimensions affect commitment and behavioral outcomes. Analyzing the direct relationships revealed differences in the effects of the perceived value dimensions between information and entertainment mobile service user groups, between effects on commitment, repurchase intentions and word-of-mouth intentions, as well as between effects on commitment to the provider and to the mobile channel as such. This thesis contributes to the earlier perceived value literature by structuring the value dimensions into two groups. Most importantly, the thesis contributes to the value and loyalty literature by increasing understanding of how the different dimensions of perceived value directly affect commitment and post-purchase intentions. The results have implications for further theory development in the electronic services field using multidimensional latent constructs, and practical implications for enhancing commitment to the content provider and for differentiated marketing strategies in the mobile field. The general conclusion of this thesis is that differentiated value-based marketing of mobile services is essential for attracting committed customers who will use the same provider's content in the future as well. Minna Pihlström is associated with the Centre for Relationship Marketing and Service Management (CERS) at Hanken.
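The value structure described above can be written schematically as a higher-order latent variable model. The equations below are only an illustrative sketch in standard SEM notation, not the exact specification estimated in the thesis; all symbols are assumed.

    x_{ij} = \lambda_{ij} V_i + \varepsilon_{ij}                       (indicators j of value dimension V_i)
    V_i = \gamma_i F_{content} + \delta_i,   i in {emotional, social, convenience, monetary}
    V_i = \gamma_i F_{context} + \delta_i,   i in {epistemic, conditional}
    Commitment = \sum_i \beta_i V_i + \zeta_1,   Repurchase/WOM intention = \sum_i \beta'_i V_i + \zeta_2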

Relevance: 100.00%

Abstract:

This thesis explores the link between South-South remittance and development. It attempts to establish an improved understanding of the role of immigrants as agents of constituency growth and development. By doing so, it illuminates the dark corners of the policy implications that the unconventional development agency of immigrants might have for countries in the Organization for Economic Cooperation and Development (OECD). The thesis problematises the existence of state-centric international cooperation as providing the recipe for failed aid in the face of the menace of global poverty. In the last half-century, the relative shift of focus to non-state actors brought about the proliferation of NGOs. That, in turn, helped improve international access to crisis situations; however, their long-term remedial impacts on poverty and development have been contested. Major misgivings about non-governmental organizations (NGOs) are, on the one hand, low levels of goal-bound expenditure and, on the other, a lack of independence from the influence of the state. Therefore, the thesis endeavours to empirically verify its fundamental question: whether remitting immigrants constitute an alternative development agency to the traditional players, the state and NGOs. Its main argument is that, due to the state's failures in bringing about sustainable development in many countries of the South, the future of poverty reduction and development also rests in immigrants' remittances. Nonetheless, in the last decade, the remittance-security nexus dominated the discourse. Because of that, remittance was viewed as something requiring a global regime and restrictions. These temptations to tightly regulate remittance flows carry the danger of overlooking its trans-boundary nature and its strong link with the livelihood of the poor. Therefore, to avoid unintended consequences of interventions, there needs to be a clear policy based on discursive knowledge of the issues of North-South and South-South remittances. The study involved both literature-based and empirical research. It employed Discourse Analysis as the main method for the former and snowballing as its approach for the latter. For the first part, the thesis constructed three conceptual models: the meteorological model, the police model and the ecological model of the remittance-development nexus. Through this modelling, the thesis achieved a better deconstruction of the concepts of remittance, immigrants and development agency. The protagonists of each model, the values and interests they represent, and their main arguments along various lines of dichotomy have been discussed. For instance, the main traits of the meteorological model include: it sees remittance as a transitional economic variable requiring constant speculation and global management; it acts as a meteorological station for following up or predicting the level, direction, flow and movement of global remittance. It focuses on official lines and considers the state the legitimate recipient of advice and of the positive consequences of remittance. On the other hand, the police model views remittance as being, at best, development-neutral, or as an illicit activity requiring global regulations and tight control. Both immigrants and remittance are viewed as subversive to establishments. It gives primacy to the state as a stable agent of development and a partner for international cooperation. The antithesis to the police model is supplied by the ecological model, of which this thesis is a part.
The ecological model of remittance and immigrants argues that tight global regulations alone cannot be a panacea for possible abuse of the informal remittance system. The ecological model not only links remittance to poverty reduction, the main thrust of development, but also considers the development agency of immigrants a critical factor for 21st-century North-South development intervention. It sees immigrants as development-conscious and their remittances as the most stable flow of finance to developing countries. Besides, it sees remittance as a more effective solution to poverty than Foreign Direct Investment and international aid. This thesis focuses on the significance of South-South remittance and investigates the South Africa - Ethiopia remittance corridor as a case study, empirically verifying the role of Ethiopian (Kembata and Hadiya) immigrants in South Africa as agents of local development back home. The study involved techniques of interview, group discussions, observations and investigative study. It also looked into the determinants of their migration to South Africa and of their remittance to Ethiopia. The theoretical models in the first part of the thesis have been operationalised throughout the empirical part to verify whether the Kembata and Hadiya immigrants played the crucial role in their household poverty reduction and local development in comparison with the Ethiopian state and the NGOs involved in the system. As evidenced by the research, the thesis has made three distinct contributions to the discourse of the remittance-development nexus. First, it systematized the debate about linkages between remittance, immigrants, development agency and policy of international cooperation by creating three conceptual models (schools of thought); second, it singled out remitting immigrants as new agents of development in the South; third, it deconstructed the concept of remittance and established South-South remittance as an additional sphere of academic investigation. In addition to the above contributions, the thesis finds that Kembata and Hadiya immigrants have engaged in more developmental activities in their locality than usually anticipated. Hence, it concludes that Ethiopian immigrants constitute an alternative development agency to the state and other non-state actors in their country, and the lesson can be applied to poverty reduction strategies in most developing countries.

Relevance: 100.00%

Abstract:

Fusion energy is a clean and safe solution to the intricate question of how to produce non-polluting and sustainable energy for the constantly growing population. The fusion process does not result in any harmful waste or greenhouse gases, since small amounts of helium are the only by-product produced when the hydrogen isotopes deuterium and tritium are used as fuel. Moreover, deuterium is abundant in seawater and tritium can be bred from lithium, a common metal in the Earth's crust, rendering the fuel reservoirs practically bottomless. Due to its enormous mass, the Sun has been able to utilize fusion as its main energy source ever since it was born. But here on Earth, we must find other means to achieve the same. Inertial fusion involving powerful lasers and thermonuclear fusion employing extreme temperatures are examples of successful methods. However, these have yet to produce more energy than they consume. In thermonuclear fusion, the fuel is held inside a tokamak, which is a doughnut-shaped chamber with strong magnets wrapped around it. Once the fuel is heated up, it is controlled with the help of these magnets, since the required temperatures (over 100 million degrees C) will separate the electrons from the nuclei, forming a plasma. Once the fusion reactions occur, excess binding energy is released as energetic neutrons, which are absorbed in water in order to produce steam that runs turbines. Keeping the power losses from the plasma low, thus allowing for a high number of reactions, is a challenge. Another challenge is related to the reactor materials, since the confinement of the plasma particles is not perfect, resulting in particle bombardment of the reactor walls and structures. Material erosion and activation as well as plasma contamination are expected. Adding to this, the high-energy neutrons will cause radiation damage in the materials, causing, for instance, swelling and embrittlement. In this thesis, the behaviour of a material situated in a fusion reactor was studied using molecular dynamics simulations. Simulations of processes in the next-generation fusion reactor ITER include the reactor materials beryllium, carbon and tungsten as well as the plasma hydrogen isotopes. This means that interaction models, i.e. interatomic potentials, for this complicated quaternary system are needed. The task of finding such potentials is nonetheless nearly at its end, since models for the beryllium-carbon-hydrogen interactions were constructed in this thesis and, as a continuation of that work, a beryllium-tungsten model is under development. These potentials are combinable with the earlier tungsten-carbon-hydrogen ones. The potentials were used to explain the chemical sputtering of beryllium due to deuterium plasma exposure. During experiments, a large fraction of the sputtered beryllium atoms were observed to be released as BeD molecules, and the simulations identified the swift chemical sputtering mechanism, previously not believed to be important in metals, as the underlying mechanism. Radiation damage in the reactor structural materials vanadium, iron and iron chromium, as well as in the wall material tungsten and the mixed alloy tungsten carbide, was also studied in this thesis. Interatomic potentials for vanadium, tungsten and iron were modified to be better suited for simulating the collision cascades that are formed during particle irradiation, and the potential features affecting the resulting primary damage were identified.
Including the often neglected electronic effects in the simulations was also shown to have an impact on the damage. With proper tuning of the electron-phonon interaction strength, experimentally measured quantities related to ion-beam mixing in iron could be reproduced. The damage in tungsten carbide alloys showed elemental asymmetry, as the major part of the damage consisted of carbon defects. On the other hand, modelling the damage in the iron chromium alloy, essentially representing steel, showed that small additions of chromium do not noticeably affect the primary damage in iron. Since a complete assessment of the response of a material in a future full-scale fusion reactor is not achievable using only experimental techniques, molecular dynamics simulations are of vital help. This thesis has not only provided insight into complicated reactor processes and improved current methods, but also offered tools for further simulations. It is therefore an important step towards making fusion energy more than a future goal.
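For readers unfamiliar with the method, the following toy sketch shows the core of a molecular dynamics simulation: velocity-Verlet time integration with a simple Lennard-Jones pair potential standing in for the interaction model. It is purely illustrative; the thesis itself used analytical bond-order potentials for the Be-C-H and W systems, and all names and parameters below are assumptions.

    import numpy as np

    def lj_forces(pos, eps=1.0, sigma=1.0):
        """Pairwise Lennard-Jones forces and potential energy (no cutoff, no periodic boundaries)."""
        n = len(pos)
        forces = np.zeros_like(pos)
        energy = 0.0
        for i in range(n):
            for j in range(i + 1, n):
                rij = pos[i] - pos[j]
                r2 = np.dot(rij, rij)
                inv6 = (sigma ** 2 / r2) ** 3          # (sigma/r)^6
                energy += 4 * eps * (inv6 ** 2 - inv6)
                fpair = 24 * eps * (2 * inv6 ** 2 - inv6) / r2
                forces[i] += fpair * rij               # force on i from j
                forces[j] -= fpair * rij
        return forces, energy

    def velocity_verlet(pos, vel, mass, dt, steps):
        """Integrate Newton's equations of motion with the velocity-Verlet scheme."""
        f, _ = lj_forces(pos)
        for _ in range(steps):
            vel += 0.5 * dt * f / mass
            pos += dt * vel
            f, _ = lj_forces(pos)
            vel += 0.5 * dt * f / mass
        return pos, vel

    # Tiny toy system in reduced units: three atoms near equilibrium spacing
    pos = np.array([[0.0, 0.0, 0.0], [1.12, 0.0, 0.0], [0.5, 1.5, 0.0]])
    vel = np.zeros_like(pos)
    pos, vel = velocity_verlet(pos, vel, mass=1.0, dt=0.002, steps=1000)
    print(pos)

A production cascade simulation adds periodic boundaries, neighbour lists, many-body potentials, an electronic stopping or electron-phonon coupling term, and a primary knock-on atom given a high initial velocity; the integration loop itself stays essentially the same.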

Relevance: 100.00%

Abstract:

The planet Mars is the Earth's neighbour in the Solar System. Planetary research stems from a fundamental need to explore our surroundings, typical of mankind. Manned missions to Mars are already being planned, and understanding the environment to which the astronauts would be exposed is of utmost importance for a successful mission. Information on the Martian environment given by models is already used in designing the landers and orbiters sent to the red planet. In particular, studies of the Martian atmosphere are crucial for instrument design, entry, descent and landing system design, landing site selection, and aerobraking calculations. Research on planetary atmospheres can also contribute to atmospheric studies of the Earth via model testing and development of parameterizations: even after decades of modelling the Earth's atmosphere, we are still far from perfect weather predictions. On a global level, Mars has also been experiencing climate change. The aerosol effect is one of the largest unknowns in present terrestrial climate change studies, and the role of aerosol particles in any climate is fundamental: studies of climate variations on another planet can help us better understand our own global change. In this thesis I have used an atmospheric column model for Mars to study the behaviour of the lowest layer of the atmosphere, the planetary boundary layer (PBL), and I have developed nucleation (particle formation) models for Martian conditions. The models were also coupled to study, for example, fog formation in the PBL. The PBL is perhaps the most significant part of the atmosphere for landers and humans, since we live in it and experience its state, for example, as gusty winds, night frost, and fogs. However, PBL modelling in weather prediction models is still a difficult task. Mars hosts a variety of cloud types, mainly composed of water ice particles, but CO2 ice clouds also form in the very cold polar night and at high altitudes elsewhere. Nucleation is the first step in particle formation, and always includes a phase transition. Cloud crystals on Mars form from vapour to ice on ubiquitous, suspended dust particles. Clouds on Mars have a small radiative effect in the present climate, but it may have been more important in the past. This thesis represents an attempt to model the Martian atmosphere at the smallest scales with high resolution. The models used and developed during the course of the research are useful tools for developing and testing parameterizations for larger-scale models all the way up to global climate models, since the small-scale models can describe processes that in the large-scale models are reduced to subgrid (not explicitly resolved) scale.
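As a hedged illustration of the kind of expression such nucleation models typically build on, classical nucleation theory gives the rate of heterogeneous ice nucleation on a dust grain roughly as (the exact formulation used in the thesis may differ):

    \Delta G^{*} = \frac{16 \pi \sigma^{3} v_{ice}^{2}}{3 \, (k_{B} T \ln S)^{2}} \, f(m, x)

    J = J_{0} \exp\!\left( - \frac{\Delta G^{*}}{k_{B} T} \right)

where \sigma is the ice-vapour surface tension, v_{ice} the volume of a water molecule in ice, S the saturation ratio, and f(m, x) \le 1 a geometric factor that lowers the energy barrier on a pre-existing (dust) surface.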

Relevance: 100.00%

Abstract:

In recent years, thanks to developments in information technology, large-dimensional datasets have become increasingly available. Researchers now have access to thousands of economic series, and the information contained in them can be used to create accurate forecasts and to test economic theories. To exploit this large amount of information, researchers and policymakers need an appropriate econometric model. Usual time series models, such as vector autoregressions, cannot incorporate more than a few variables. There are two ways to solve this problem: use variable selection procedures or gather the information contained in the series to create an index model. This thesis focuses on one of the most widespread index models, the dynamic factor model (the theory behind this model, based on the previous literature, is the core of the first part of this study), and its use in forecasting Finnish macroeconomic indicators (which is the focus of the second part of the thesis). In particular, I forecast economic activity indicators (e.g. GDP) and price indicators (e.g. the consumer price index) from three large Finnish datasets. The first dataset contains a large set of aggregated series obtained from the Statistics Finland database. The second dataset is composed of economic indicators from the Bank of Finland. The last dataset is formed by disaggregated data from Statistics Finland, which I call the micro dataset. The forecasts are computed following a two-step procedure: in the first step I estimate a set of common factors from the original dataset; the second step consists of formulating forecasting equations that include the factors extracted previously. The predictions are evaluated using the relative mean squared forecast error, where the benchmark model is a univariate autoregressive model. The results are dataset-dependent. The forecasts based on factor models are very accurate for the first dataset (the Statistics Finland one), while they are considerably worse for the Bank of Finland dataset. The forecasts derived from the micro dataset are still good, but less accurate than the ones obtained in the first case. This work leads to multiple research developments. The results obtained here can be replicated for longer datasets. The non-aggregated data can be represented in an even more disaggregated form (firm level). Finally, the use of the micro data, one of the major contributions of this thesis, can be useful in the imputation of missing values and in the creation of flash estimates of macroeconomic indicators (nowcasting).
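The two-step procedure described here can be sketched compactly. The Python fragment below is illustrative only: simulated data stand in for the Finnish panels, and the function names are assumptions. It extracts principal-component factors from a standardized panel, forms a one-step-ahead forecasting equation that includes the factors, and compares it with a univariate autoregressive benchmark of the kind used for the relative mean squared forecast error.

    import numpy as np

    def extract_factors(X, n_factors):
        """Principal-component estimate of common factors from a T x N panel."""
        Xs = (X - X.mean(axis=0)) / X.std(axis=0)
        U, S, Vt = np.linalg.svd(Xs, full_matrices=False)
        return U[:, :n_factors] * S[:n_factors]        # T x r factor estimates

    def factor_forecast(y, factors, h=1):
        """Regression-based h-step forecast: y_{t+h} = a + b*y_t + c'F_t + e."""
        T = len(y)
        Z = np.column_stack([np.ones(T - h), y[:T - h], factors[:T - h]])
        beta, *_ = np.linalg.lstsq(Z, y[h:], rcond=None)
        z_last = np.concatenate(([1.0], [y[-1]], factors[-1]))
        return z_last @ beta

    def ar1_forecast(y, h=1):
        """Benchmark univariate AR(1) forecast."""
        Z = np.column_stack([np.ones(len(y) - h), y[:-h]])
        beta, *_ = np.linalg.lstsq(Z, y[h:], rcond=None)
        return np.array([1.0, y[-1]]) @ beta

    # Toy example with simulated data (stand-in for the Finnish panels)
    rng = np.random.default_rng(0)
    T, N, r = 120, 80, 2
    F = rng.standard_normal((T, r)).cumsum(axis=0) * 0.1
    X = F @ rng.standard_normal((r, N)) + rng.standard_normal((T, N))
    y = F[:, 0] + 0.2 * rng.standard_normal(T)          # target series, e.g. GDP growth

    factors = extract_factors(X, n_factors=r)
    print("factor-model forecast:", factor_forecast(y, factors))
    print("AR(1) benchmark      :", ar1_forecast(y))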

Relevance: 50.00%

Abstract:

This thesis studies homogeneous classes of complete metric spaces. Over the past few decades model theory has been extended to cover a variety of nonelementary frameworks. Shelah introduced the abstract elementary classes (AEC) in the 1980s as a common framework for the study of nonelementary classes. Another direction of extension has been the development of model theory for metric structures. This thesis takes a step in the direction of combining these two by introducing an AEC-like setting for studying metric structures. To find a balance between generality and the possibility of developing stability-theoretic tools, we work in a homogeneous context, thus extending the usual compact approach. The homogeneous context enables the application of stability-theoretic tools developed in discrete homogeneous model theory. Using these we prove categoricity transfer theorems for homogeneous metric structures with respect to isometric isomorphisms. We also show how generalized isomorphisms can be added to the class, giving a model-theoretic approach to, e.g., Banach space isomorphisms or operator approximations. The novelty is the built-in treatment of these generalized isomorphisms, making, e.g., stability up to perturbation the natural stability notion. With respect to these generalized isomorphisms we develop a notion of independence. It behaves well already for structures which are omega-stable up to perturbation and coincides with the one from classical homogeneous model theory over sufficiently saturated models. We also introduce a notion of isolation and prove dominance for it.

Relevance: 40.00%

Abstract:

This study examines boundaries in health care organizations. Boundaries are sometimes considered things to be avoided in everyday living. This study suggests that boundaries can be important temporally and spatially emerging locations of development, learning, and change in inter-organizational activity. Boundaries can act as mediators of cultural and social formations and practices. The data of the study were gathered in an intervention project during the years 2000-2002 in Helsinki, in which the care of 26 patients with multiple and chronic illnesses was improved. The project used the Change Laboratory method, a research-assisted method for developing work. The research questions of the study are: (1) What are the boundary dynamics of development, learning, and change in health care for patients with multiple and chronic illnesses? (2) How do individual patients experience boundaries in their health care? (3) How are the boundaries of health care constructed and reconstructed in social interaction? (4) What are the dynamics of boundary crossing in the experimentation with the new tools and new practice? The methodology of the study, the ethnography of the multi-organizational field of activity, draws on cultural-historical activity theory and anthropological methods. The ethnographic fieldwork involves multiple research techniques and a collaborative strategy for generating research data. The data of this study consist of observations, interviews, transcribed intervention sessions, and patients' health documents. According to the findings, the care of patients with multiple and chronic illnesses emerges as fragmented by divisions between the patient and the professionals, between medical specialties, and between levels of the health care organization. These boundaries have a historical origin in the Finnish health care system. As a consequence of these boundaries, patients frequently experience uncertainty and neglect in their care. However, the boundaries of a single patient were transformed in the Change Laboratory discussions among patients, professionals and researchers. In these discussions, the questioning of the prevailing boundaries was triggered by the observation of gaps in inter-organizational care. Transformation of the prevailing boundaries was achieved in the implementation of the collaborative care agreement tool and the practice of negotiated care. However, the new tool and practice did not expand into general use during the project. The study identifies two complementary models for the development of health care organization in Finland: the 'care package model', which is based on productivity and process models adopted from engineering, and the 'model of negotiated care', which is based on co-configuration and the public good.

Relevance: 40.00%

Abstract:

In the future, the number of disabled drivers requiring a special evaluation of their driving ability will increase due to the ageing population as well as the progress of adaptive technology. This places pressure on the development of the driving evaluation system. Despite quite intensive research, there is still no consensus concerning what the factual situation in a driver evaluation is (methodology), which measures should be included in an evaluation (methods), and how an evaluation should be carried out (practice). In order to find answers to these questions, we carried out empirical studies and simultaneously elaborated a conceptual model of driving and of driving evaluation. The findings of the empirical studies can be condensed into the following points: 1) Driving ability as defined by the on-road driving test is associated with different laboratory measures depending on the study group. Faults in the laboratory tests predicted faults in the on-road driving test in the novice group, whereas slowness in the laboratory predicted driving faults in the group of experienced drivers. 2) The Parkinson study clearly showed that even an experienced clinician cannot reliably evaluate a disabled person's driving ability without collaboration with other specialists. 3) The main finding of the stroke study was that the use of a multidisciplinary team as a source of information harmonises the specialists' evaluations. 4) The patient studies demonstrated that disabled persons themselves, as well as their spouses, are as a rule not reliable evaluators. 5) From the safety point of view, perceptible operations with the control devices are not crucial; the correct mental actions which the driver carries out with the help of the control devices are of the greatest importance. 6) Personality factors, including higher-order needs and motives, attitudes and a degree of self-awareness, particularly a sense of illness, are decisive when evaluating a disabled person's driving ability. Personality is also the main source of resources for compensating for lower-order physical deficiencies and restrictions. From the work with the conceptual model we drew the following methodological conclusions: First, the driver has to be considered a holistic subject of the activity, a multilevel, hierarchically organised system of an organism, a temperament, an individuality and a personality, where the personality is the leading subsystem from the standpoint of safety. Second, driving, as a human form of sociopractical activity, is also a hierarchically organised dynamic system. Third, an evaluation of driving ability is a question of matching these two hierarchically organised structures: the subject of the activity and the activity proper. Fourth, an evaluation has to be person-centred, not disease-, function- or method-centred. On the basis of our study, a multidisciplinary team (practitioner, driving school teacher, psychologist, occupational therapist) is recommended for use in demanding driver evaluations. Primary in driver evaluations is a coherent conceptual model, while the concrete methods of evaluation may vary. However, the on-road test must always be performed if possible.

Relevance: 40.00%

Abstract:

This dissertation examined the research-based teacher education at the University of Helsinki from different theoretical and practical perspectives. Five studies focused on these perspectives separately as well as overlappingly. Study I focused on the reflection process of graduating teacher students. The data consisted of essays the students wrote as their last assignment before graduating, in which they examined their development as researchers during the MA thesis research process. The results indicated that the teacher students had analysed their own development thoroughly during the process and that they had reflected on theoretical as well as practical educational matters. The results also pointed out that, in the students' opinion, personally conducted research is a significant learning process. -- Study II investigated teacher students' workplace learning and the integration of theory and practice in teacher education. The students' interviews focused on their learning of teacher's work prior to education. The interviewees' responses concerning their 'surviving' in teaching prior to teacher education fell into three categories: learning through experiences, school as a teacher learning environment, and case-specific learning. The survey part of the study focused on the integration of theory and practice within the education process. The results showed that the students who worked while they studied took advantage of their studies and applied them to their work. They set more demanding teaching goals and reflected on their work more theoretically. -- Study III examined practical aspects of the teacher students' MA thesis research as well as the integration of theory and practice in teacher education. The participants were surveyed using a web-based survey which dealt with the participants' teacher education experiences. According to the results, most of the students had chosen a practical topic for their MA thesis, one arising from their work environment, and most had chosen a research topic that would develop their own teaching. The results showed that the integration of theory and practice had taken place in much of the course work, but most obviously in the practicum periods, and also in the courses concerning the school subjects. The majority felt that the education had in some way been successful with regard to integration. -- Study IV explored the idea of considering teacher students' MA thesis research as professional development. Twenty-three teachers were interviewed on the subject of their experiences of conducting research about their own work as teachers. The results of the interviews showed that the reasons for choosing the MA thesis research topic were multiple: practical, theoretical, personal and professional reasons, as well as outside influence. The objectives of the MA thesis research, besides graduating, were actual projects, developing the ability to work as teachers, conducting significant research, and sharing knowledge of the topic. The results indicated that an MA thesis can function as a tool for professional development, for example in finding ways of adjusting teaching, increasing interaction skills, gaining knowledge or improving reflection on theory and/or practice, strengthening self-confidence as a teacher, increasing research skills or academic writing skills, as well as becoming critical and being able to read scientific and academic literature. -- Study V analysed teachers' views of the impact of practitioner research.
According to the results, the interviewees considered the benefits of practitioner research to be many, affecting teachers, pupils, parents, the working community, and the wider society. Most of the teachers indicated that they intended to continue to conduct research in the future. The results also showed that teachers often reflected personally and collectively, and viewed this as important. -- These five studies point out that MA thesis research is, and can be, a useful tool for increasing reflection related to personal and professional development, as well as for integrating theory and practice. The studies suggest that more advantage could be taken of the MA thesis research project. More integration of working and studying could and should be made possible for teacher students. This could be done in various ways within teacher education, but the MA thesis should be seen as a pedagogical possibility.

Relevance: 40.00%

Abstract:

The rupture of a cerebral artery aneurysm causes a devastating subarachnoid hemorrhage (SAH), with a mortality of almost 50% during the first month. Each year, 8-11/100 000 people suffer from aneurysmal SAH in Western countries, but the number is twice as high in Finland and Japan. The disease is most common among those of working age, the mean age at rupture being 50-55 years. Unruptured cerebral aneurysms are found in 2-6% of the population, but knowledge about the true risk of rupture is limited. The vast majority of aneurysms should be considered rupture-prone, and treatment for these patients is warranted. Both unruptured and ruptured aneurysms can be treated by either microsurgical clipping or endovascular embolization. In a standard microsurgical procedure, the neck of the aneurysm is closed by a metal clip, sealing off the aneurysm from the circulation. Endovascular embolization is performed by packing the aneurysm from the inside of the vessel lumen with detachable platinum coils. Coiling is associated with slightly lower morbidity and mortality than microsurgery, but the long-term results of microsurgically treated aneurysms are better. Endovascular treatment methods are constantly being developed further in order to achieve better long-term results. New coils and novel embolic agents need to be tested in a variety of animal models before they can be used in humans. In this study, we developed an experimental rat aneurysm model and showed its suitability for testing endovascular devices. We optimized noninvasive MRI sequences at 4.7 Tesla for follow-up of coiled experimental aneurysms and for volumetric measurement of aneurysm neck remnants. We used this model to compare platinum coils with polyglycolic-polylactic acid (PGLA)-coated coils, and showed the benefits of the latter in this model. The experimental aneurysm model and the imaging methods also gave insight into the mechanisms involved in aneurysm formation, and the model can be used in the development of novel imaging techniques. This model is affordable, easily reproducible, reliable, and suitable for MRI follow-up. It is also suitable for endovascular treatment, and it evades spontaneous occlusion.