Abstract:
This dissertation examines the manifestations and effects of magnetic reconnection in the Earth's magnetosphere. The central research tool is the magnetohydrodynamic (MHD) Gumics magnetospheric simulation, and new methods are developed for identifying and measuring the reconnection that occurs in the simulation. An MHD simulation is suited to studying large-scale phenomena, so the picture of reconnection is complemented with small-scale features using the Cluster satellites. The most important methodological advance of this work is the localization of the reconnection line as the separator line at the junction of regions of topologically distinct magnetic field lines, using the four-field junction method. This topological approach is particularly useful at the magnetopause, whose complex geometry makes it difficult to apply reconnection-line search methods based on the local behaviour of the magnetic field. A topologically defined reconnection line is also easy to identify as a nodal point of the global convection of the magnetosphere. The behaviour of the magnetopause reconnection line in Gumics follows theoretical predictions based on the component reconnection assumption. The quantitative analysis of reconnection in the Gumics simulation is based on energy conversion, computed as the divergence of the Poynting vector or as the Poynting flux through a chosen closed surface. The distribution of reconnection-related energy conversion on the magnetopause is examined via the surface density of energy conversion, and the total amount of reconnection via the reconnection power. The reconnection powers of the magnetopause and the tail are of the same order of magnitude in the simulation. The most important parameters controlling the magnetopause reconnection power are the solar wind speed and the direction of the solar wind magnetic field.
Magnetopause reconnection in turn regulates the access of energy and matter into the magnetosphere, although the fluxes crossing the magnetopause are not exactly proportional to the reconnection power. The tail reconnection power, by contrast, is directly proportional to the energy flux arriving from the magnetopause; tail reconnection in Gumics is thus a passive energy processor that follows the external driving. The reconnection produced by the simulation is realistic when viewed at the global scale of the magnetosphere, but satellite observations reveal features of reconnection at scales finer than the resolution of the simulation. On the observational side, the most important finding of this dissertation is the tilting of the Hall fields, part of the structure of the proton diffusion region, along with the flapping of the tail current sheet.
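The energy-conversion diagnostic mentioned above (the divergence of the Poynting vector) can be sketched numerically. The following is an illustrative finite-difference sketch on a uniform Cartesian grid, not the actual Gumics implementation; the field arrays and grid spacing are assumed inputs.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability [H/m]

def poynting_divergence(E, B, dx):
    """Energy-conversion density as div(S), with S = E x B / mu0.

    E, B: arrays of shape (3, nx, ny, nz) on a uniform grid
    dx:   grid spacing [m]
    Returns div(S) with shape (nx, ny, nz); negative values mark
    load regions where electromagnetic energy is converted.
    """
    S = np.cross(E, B, axis=0) / MU0
    # Sum of dS_i/dx_i over the three spatial directions
    return sum(np.gradient(S[i], dx, axis=i) for i in range(3))
```

By the divergence theorem, integrating this quantity over a volume is equivalent to the Poynting flux through the enclosing closed surface, which is the second form of the diagnostic named in the abstract.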
Abstract:
In order to predict the current state and future development of the Earth's climate, detailed information on atmospheric aerosols and aerosol-cloud interactions is required. Furthermore, these interactions need to be expressed in such a way that they can be represented in large-scale climate models. The largest uncertainties in the estimate of radiative forcing on the present-day climate are related to the direct and indirect effects of aerosols. In this work aerosol properties were studied at Pallas and Utö in Finland, and at Mount Waliguan in Western China. Approximately two years of data from each site were analyzed. In addition, data from two intensive measurement campaigns at Pallas were used. The measurements at Mount Waliguan were the first long-term aerosol particle number concentration and size distribution measurements conducted in this region. They revealed that the number concentrations of aerosol particles at Mount Waliguan were much higher than those measured at similar altitudes in other parts of the world. The particles were concentrated in the Aitken size range, indicating that they were produced within a couple of days prior to reaching the site rather than being transported over thousands of kilometers. Aerosol partitioning between cloud droplets and cloud interstitial particles was studied at Pallas during the two measurement campaigns, the First Pallas Cloud Experiment (First PaCE) and the Second Pallas Cloud Experiment (Second PaCE). The method of using two differential mobility particle sizers (DMPS) to calculate the number concentration of activated particles was found to agree well with direct measurements of cloud droplets. Several parameters important in cloud droplet activation were found to depend strongly on the air mass history, and the effects of these parameters partially cancelled each other out. The aerosol number-to-volume concentration ratio was studied at all three sites using long time-series data sets.
The ratio was found to vary more than in earlier studies, but less than either the aerosol particle number concentration or the volume concentration alone. Both an air mass dependency and a seasonal pattern were found at Pallas and Utö, but only a seasonal pattern at Mount Waliguan. The number-to-volume concentration ratio was found to follow the seasonal temperature pattern well at all three sites. A new parameterization for the partitioning between cloud droplets and cloud interstitial particles was developed. The parameterization uses the aerosol particle number-to-volume concentration ratio and the aerosol particle volume concentration as the only information on the aerosol number and size distribution. The new parameterization is computationally more efficient than the more detailed parameterizations currently in use, although its accuracy is slightly lower. The new parameterization was also compared to directly observed cloud droplet number concentration data, and good agreement was found.
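The number-to-volume concentration ratio used above can be illustrated for a binned size distribution. This is a generic sketch assuming spherical particles; the function name and inputs are hypothetical and not from the thesis.

```python
import numpy as np

def number_to_volume_ratio(diameters, dN):
    """Aerosol number-to-volume concentration ratio from a binned
    size distribution.

    diameters: bin midpoint diameters [m]
    dN:        number concentration per bin [m^-3]
    Assumes spherical particles, volume = (pi/6) d^3 per particle.
    """
    N = dN.sum()
    V = np.sum(dN * np.pi / 6.0 * diameters**3)
    return N / V
```

Because N is dominated by small particles and V by large ones, the ratio compresses the variability of either quantity alone, which is consistent with the behaviour reported above.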
Abstract:
This work focuses on the role of macroseismology in the assessment of seismicity and probabilistic seismic hazard in Northern Europe. The main type of data under consideration is a set of macroseismic observations available for a given earthquake. The macroseismic questionnaires used to collect earthquake observations from local residents since the late 1800s constitute a special part of the seismological heritage in the region. Information on the earthquakes felt on the coasts of the Gulf of Bothnia between 31 March and 2 April 1883 and on 28 July 1888 was retrieved from contemporary Finnish and Swedish newspapers, while the earthquake of 4 November 1898 GMT is an example of an early systematic macroseismic survey in the region. A data set of more than 1200 macroseismic questionnaires is available for the earthquake in Central Finland on 16 November 1931. Basic macroseismic investigations, including the preparation of new intensity data point (IDP) maps, were conducted for these earthquakes. Previously disregarded usable observations were found in the press. The improved collection of IDPs of the 1888 earthquake shows that this event was a rare occurrence in the area; in contrast to earlier notions, it was felt on both sides of the Gulf of Bothnia. The data on the earthquake of 4 November 1898 GMT were augmented with historical background information discovered in various archives and libraries. This earthquake was of some concern to the authorities, because extra fire inspections were conducted in at least three towns, i.e. Tornio, Haparanda and Piteå, located in the centre of the area of perceptibility. This event posed the indirect hazard of fire, although its magnitude, around 4.6, was minor on the global scale. The distribution of slightly damaging intensities was larger than previously outlined. This may have resulted from the amplification of the ground shaking in the soft soil of the coast and river valleys, where most of the population was found.
The large data set of the 1931 earthquake provided an opportunity to apply statistical methods and assess methodologies that can be used when dealing with macroseismic intensity. The data set was evaluated using correspondence analysis. Different approaches such as gridding were tested to estimate the macroseismic field from intensity values distributed irregularly in space. In general, the characteristics of intensity warrant careful consideration; a more pervasive perception of intensity as an ordinal quantity affected by uncertainties is advocated. A parametric earthquake catalogue comprising entries from both the macroseismic and instrumental eras was used for probabilistic seismic hazard assessment. The parametric-historic methodology was applied to estimate the seismic hazard at a given site in Finland and to prepare a seismic hazard map for Northern Europe. The interpretation of these results is an important issue, because the recurrence times of damaging earthquakes may well exceed thousands of years in an intraplate setting such as Northern Europe. This application may therefore be seen as an example of short-term hazard assessment.
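Gridding of irregularly spaced intensity data points can be illustrated with simple inverse-distance weighting. This is a generic sketch of one such approach, not the thesis' method; note that it treats intensity as a numeric quantity, which is exactly the kind of assumption the abstract cautions needs careful consideration for an ordinal scale.

```python
import numpy as np

def idw_grid(x, y, intensity, xg, yg, power=2.0):
    """Inverse-distance-weighted estimate of a macroseismic field
    at grid points (xg, yg) from irregularly spaced intensity
    data points (x, y, intensity). Illustrative only.
    """
    field = np.empty(len(xg))
    for k, (gx, gy) in enumerate(zip(xg, yg)):
        d2 = (x - gx) ** 2 + (y - gy) ** 2
        if np.any(d2 == 0):
            # Grid point coincides with a data point: take its value
            field[k] = intensity[np.argmin(d2)]
        else:
            w = d2 ** (-power / 2.0)
            field[k] = np.sum(w * intensity) / np.sum(w)
    return field
```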
Abstract:
Radiation therapy (RT) currently plays a significant role in the curative treatment of several cancers. External beam RT is carried out mostly using megavoltage beams from linear accelerators. Tumor eradication and normal tissue complications correlate with the dose absorbed in tissues. This dependence is normally steep, and it is crucial that the actual dose within the patient accurately corresponds to the planned dose. All factors in an RT procedure contain uncertainties, requiring strict quality assurance. From a hospital physicist's point of view, technical quality control (QC), dose calculations and methods for verifying the correct treatment location are the most important subjects. The most important factor in technical QC is verification that the radiation production of an accelerator, called output, is within narrow acceptable limits. The output measurements are carried out according to a locally chosen dosimetric QC program defining the measurement time interval and action levels. Dose calculation algorithms need to be configured for the accelerators using measured beam data. The uncertainty of such data sets limits on the best achievable calculation accuracy. All these dosimetric measurements require considerable experience, are laborious, take up resources needed for treatments and are prone to several random and systematic sources of error. Appropriate verification of the treatment location is more important in intensity modulated radiation therapy (IMRT) than in conventional RT. This is due to the steep dose gradients produced within or close to healthy tissues located only a few millimetres from the targeted volume. This thesis concentrated on investigating the quality of dosimetric measurements, the efficacy of dosimetric QC programs, the verification of measured beam data and the effect of positional errors on the dose received by the major salivary glands in head and neck IMRT.
A method was developed for estimating the effect of different dosimetric QC programs on the overall uncertainty of dose, and data were provided to facilitate the choice of a sufficient QC program. The method takes into account the local output stability and the reproducibility of the dosimetric QC measurements. A method based on model fitting of the results of the QC measurements was proposed for estimating both of these factors. The reduction of random measurement errors and the optimization of the QC procedure were also investigated, and a method and suggestions were presented for these purposes. The accuracy of beam data was evaluated in Finnish RT centres, and a sufficient accuracy level was estimated for the beam data. A method based on the use of reference beam data was developed for the QC of beam data. Dosimetric and geometric accuracy requirements were evaluated for head and neck IMRT when the function of the major salivary glands is to be spared; these criteria are based on the dose response obtained for the glands. Random measurement errors could be reduced, enabling action levels to be lowered and the measurement time interval to be prolonged from 1 month to as much as 6 months while maintaining dose accuracy. The combined effect of the proposed methods, suggestions and criteria was found to help avoid maximal dose errors of up to about 8 %. In addition, their use may make the strictest recommended overall dose accuracy level of 3 % (1 SD) achievable.
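Model fitting of QC output measurements, as described above, can separate a systematic output drift from random measurement error. The following is a minimal least-squares sketch under an assumed linear drift model; the thesis' actual model is not specified here.

```python
import numpy as np

def qc_drift_and_noise(t_days, output_pct):
    """Fit a linear drift model to accelerator output QC results.

    t_days:     measurement times [days]
    output_pct: measured output [% of nominal]
    Returns (drift per day, reproducibility = RMS of residuals).
    The residual scatter estimates the random measurement error,
    the slope the systematic output drift.
    """
    A = np.vstack([t_days, np.ones_like(t_days)]).T
    coef, *_ = np.linalg.lstsq(A, output_pct, rcond=None)
    resid = output_pct - A @ coef
    return coef[0], np.sqrt(np.mean(resid**2))
```

Separating the two components in this way is what allows action levels and the measurement interval to be tuned without losing overall dose accuracy.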
Abstract:
The planet Mars is the Earth's neighbour in the Solar System. Planetary research stems from a fundamental need to explore our surroundings, typical of mankind. Manned missions to Mars are already being planned, and understanding the environment to which the astronauts would be exposed is of utmost importance for a successful mission. Information on the Martian environment provided by models is already used in designing the landers and orbiters sent to the red planet. In particular, studies of the Martian atmosphere are crucial for instrument design, entry, descent and landing system design, landing site selection, and aerobraking calculations. Research on planetary atmospheres can also contribute to atmospheric studies of the Earth via model testing and the development of parameterizations: even after decades of modeling the Earth's atmosphere, we are still far from perfect weather predictions. On a global level, Mars has also been experiencing climate change. The aerosol effect is one of the largest unknowns in present terrestrial climate change studies, and the role of aerosol particles in any climate is fundamental: studies of climate variations on another planet can help us better understand our own global change. In this thesis I have used an atmospheric column model for Mars to study the behaviour of the lowest layer of the atmosphere, the planetary boundary layer (PBL), and I have developed nucleation (particle formation) models for Martian conditions. The models were also coupled to study, for example, fog formation in the PBL. The PBL is perhaps the most significant part of the atmosphere for landers and humans, since we live in it and experience its state, for example, as gusty winds, night frost, and fogs. However, PBL modelling in weather prediction models is still a difficult task. Mars hosts a variety of cloud types, mainly composed of water ice particles, but CO2 ice clouds also form in the very cold polar night and at high altitudes elsewhere.
Nucleation is the first step in particle formation and always involves a phase transition. Cloud crystals on Mars form from vapour to ice on ubiquitous, suspended dust particles. Clouds on Mars have a small radiative effect in the present climate, but their effect may have been more important in the past. This thesis represents an attempt to model the Martian atmosphere at the smallest scales with high resolution. The models used and developed during the course of the research are useful tools for developing and testing parameterizations for larger-scale models all the way up to global climate models, since the small-scale models can describe processes that in the large-scale models are reduced to the subgrid (not explicitly resolved) scale.
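The nucleation step described above is governed in classical nucleation theory by a critical cluster size: clusters larger than the Kelvin radius r* = 2 sigma v / (k_B T ln S) grow, smaller ones evaporate. A minimal sketch of this generic relation follows; it is textbook classical theory, not the thesis' Mars-specific (heterogeneous, dust-mediated) model, and all parameter values are the caller's assumptions.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant [J/K]

def critical_radius(sigma, v_mol, T, S):
    """Kelvin critical radius for vapour-to-particle nucleation.

    sigma: surface tension / surface energy [N/m]
    v_mol: volume per molecule in the condensed phase [m^3]
    T:     temperature [K]
    S:     saturation ratio (must exceed 1 for nucleation)
    """
    if S <= 1.0:
        raise ValueError("nucleation requires supersaturation (S > 1)")
    return 2.0 * sigma * v_mol / (K_B * T * math.log(S))
```

The inverse dependence on ln S is why the very cold, strongly supersaturated polar night permits even CO2 ice cloud formation.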
Abstract:
Silicon particle detectors are used in several applications, and future large-scale experiments will clearly require better hardness against particle radiation than can be provided today. To achieve this goal, more irradiation studies with defect-generating bombarding particles are needed. Protons can be considered important bombarding species, although neutrons and electrons are perhaps the most widely used particles in such irradiation studies. Protons provide unique possibilities, as their defect production rates are clearly higher than those of neutrons and electrons, and their damage creation in silicon is most similar to that of pions. This thesis explores the development and testing of an irradiation facility that provides cooling of the detector and on-line electrical characterisation, such as current-voltage (IV) and capacitance-voltage (CV) measurements. This irradiation facility, which employs a 5-MV tandem accelerator, appears to function well, but some disadvantageous limitations are associated with MeV-proton irradiation of silicon particle detectors. Typically, detectors are in non-operational mode during irradiation (i.e., without the applied bias voltage). However, in real experiments the detectors are biased; the ionising protons generate electron-hole pairs, and a rise in the proton flux may cause the detector to break down. This limits the proton flux for the irradiation of biased detectors. In this work, it is shown that, if detectors are irradiated and kept operational, the electric field decreases the introduction rate of negative space charge and current-related damage. The effects of various particles with different energies are scaled to each other by the non-ionising energy loss (NIEL) hypothesis. The type of defects induced by irradiation depends on the energy used, and this thesis also discusses the minimum proton energy at which NIEL scaling remains valid.
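The NIEL scaling mentioned above can be stated compactly: bulk displacement damage is assumed proportional to the non-ionising energy loss, so fluences of different particles are compared through a ratio of tabulated NIEL values. A minimal sketch follows; the NIEL values themselves must come from published tables (the numbers in the test are illustrative only).

```python
def equivalent_fluence(phi, niel_particle, niel_reference):
    """Damage-equivalent fluence under the NIEL hypothesis.

    A fluence phi of one particle species corresponds to the
    equivalent reference fluence phi * NIEL_particle / NIEL_ref,
    e.g. with 1 MeV neutrons as the conventional reference.
    """
    kappa = niel_particle / niel_reference  # hardness factor
    return kappa * phi
```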
Abstract:
Numerical models, used for atmospheric research, weather prediction and climate simulation, describe the state of the atmosphere over the heterogeneous surface of the Earth. Several fundamental properties of atmospheric models depend on orography, i.e. on the average elevation of land over a model area. The higher the model's resolution, the more the details of orography directly influence the simulated atmospheric processes. This sets new requirements for the accuracy of the model formulations with respect to the spatially varying orography. Orography is always averaged, representing the surface elevation within the horizontal resolution of the model. In order to remove the smallest scales and steepest slopes, the continuous spectrum of orography is normally filtered (truncated) even further, typically beyond a few gridlengths of the model. This means that in numerical weather prediction (NWP) models there will always be subgrid-scale orography effects, which cannot be explicitly resolved by numerical integration of the basic equations but require parametrization. At the subgrid scale, different physical processes contribute at different scales. The parametrized processes interact with the resolved-scale processes and with each other. This study contributes to the building of a consistent, scale-dependent system of orography-related parametrizations for the High Resolution Limited Area Model (HIRLAM). The system comprises schemes for handling the effects of mesoscale (MSO) and small-scale (SSO) orography on the simulated flow and a scheme for orographic effects on the surface-level radiation fluxes. The representation of orography, the scale-dependencies of the simulated processes and the interactions between the parametrized and resolved processes are discussed. From high-resolution digital elevation data, orographic parameters are derived for both the momentum and radiation flux parametrizations. Tools for diagnostics and validation are developed and presented.
The parametrization schemes applied, developed and validated in this study are currently being implemented into the reference version of HIRLAM.
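Deriving orographic parameters from high-resolution elevation data, as described above, amounts to aggregating the fine grid to the model grid. A minimal sketch computing two basic quantities, the grid-cell mean elevation and the subgrid standard deviation, is given below; the actual HIRLAM parameter set (slopes, orientations, etc.) is richer than this.

```python
import numpy as np

def subgrid_orography_params(elev, block):
    """Mean elevation and subgrid standard deviation per model cell.

    elev:  high-resolution elevation array, shape (ny, nx), with
           ny and nx divisible by `block`
    block: number of fine cells per model grid cell in each direction
    The mean gives the resolved orography; the standard deviation is
    a typical input to subgrid-scale orography parametrizations.
    """
    ny, nx = elev.shape
    b = elev.reshape(ny // block, block, nx // block, block)
    return b.mean(axis=(1, 3)), b.std(axis=(1, 3))
```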
Abstract:
This work is focused on the effects of energetic particle precipitation of solar or magnetospheric origin on the polar middle atmosphere. The energetic charged particles have access to the atmosphere in the polar areas, where they are guided by the Earth's magnetic field. The particles penetrate down to altitudes of 20-100 km (stratosphere and mesosphere), ionising the ambient air. This ionisation leads to the production of odd nitrogen (NOx) and odd hydrogen species, which take part in catalytic ozone destruction. NOx has a very long chemical lifetime during polar night conditions, so NOx produced at high altitudes during the polar night can be transported down to lower stratospheric altitudes. Particular emphasis in this work is on the use of both space- and ground-based observations: ozone and NO2 measurements from the GOMOS instrument on board the European Space Agency's Envisat satellite are used together with subionospheric VLF radio wave observations from ground stations. Combining the two observation techniques enabled the detection of NOx enhancements throughout the middle atmosphere, including tracking the descent of NOx enhancements of high-altitude origin down to the stratosphere. GOMOS observations of the large Solar Proton Events (SPEs) of October-November 2003 showed the progression of the SPE-initiated NOx enhancements through the polar winter. In the upper stratosphere, nighttime NO2 increased by an order of magnitude, and the effect was observed to last for several weeks after the SPEs. Ozone decreases of up to 60 % from pre-SPE values were observed in the upper stratosphere nearly a month after the events. Over several weeks the GOMOS observations showed the gradual descent of the NOx enhancements to lower altitudes. Measurements from the years 2002-2006 were used to study polar winter NOx increases and their connection to energetic particle precipitation.
NOx enhancements were found to occur in good correlation with both increased high-energy particle precipitation and increased geomagnetic activity. The average wintertime polar NOx was found to have a nearly linear relationship with the average wintertime geomagnetic activity. The results of this thesis work show how important energetic particle precipitation from outside the atmosphere is as a source of NOx in the middle atmosphere, and thus its importance to the chemical balance of the atmosphere.
Abstract:
The first quarter of the 20th century witnessed a rebirth of cosmology, the study of our Universe, as a field of scientific research with testable theoretical predictions. The amount of available cosmological data grew slowly from a few galaxy redshift measurements, rotation curves and local light element abundances into the first detection of the cosmic microwave background (CMB) in 1965. By the turn of the century the amount of data had exploded, incorporating new, exciting cosmological observables such as lensing, Lyman alpha forests, type Ia supernovae, baryon acoustic oscillations and Sunyaev-Zeldovich regions, to name a few. -- The CMB, the ubiquitous afterglow of the Big Bang, carries with it a wealth of cosmological information. Unfortunately, that information, delicate intensity variations, turned out to be hard to extract from the overall temperature. After the first detection, it took nearly 30 years before the first evidence of fluctuations in the microwave background was presented. At present, high-precision cosmology is solidly based on precise measurements of the CMB anisotropy, making it possible to pinpoint cosmological parameters at the one-in-a-hundred level of precision. This progress has made it possible to build and test models of the Universe that differ in the way the cosmos evolved during some fraction of the first second after the Big Bang. -- This thesis is concerned with high-precision CMB observations. It presents three selected topics along a CMB experiment analysis pipeline. Map-making and residual noise estimation are studied using an approach called destriping. The approximate methods studied are invaluable for the large datasets of any modern CMB experiment and will undoubtedly become even more so when the next generation of experiments reaches the operational stage. -- We begin with a brief overview of cosmological observations and describe the general relativistic perturbation theory.
Next we discuss the map-making problem of a CMB experiment and the characterization of the residual noise present in the maps. Finally, the use of modern cosmological data is presented in the study of an extended cosmological model, the correlated isocurvature fluctuations. Currently available data are shown to indicate that future experiments are certainly needed to provide more information on these extra degrees of freedom. Any solid evidence of the isocurvature modes would have a considerable impact due to their power in model selection.
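The destriping approach named above models the time-ordered data (TOD) as a sky map plus a low-order noise baseline per scan segment. The following is a toy sketch that alternates between binning the map and re-estimating one constant offset per segment; real destripers solve the joint least-squares problem with longer baselines and proper noise weighting, so this illustrates only the core idea.

```python
import numpy as np

def destripe(tod, pixels, npix, seg_len, n_iter=20):
    """Toy destriper: tod[i] = map[pixels[i]] + offset[segment] + noise.

    tod:     time-ordered data, length divisible by seg_len
    pixels:  sky pixel index hit by each sample
    npix:    number of sky pixels
    seg_len: samples per scan segment (one offset per segment)
    Returns (map with mean removed, segment offsets). The overall
    constant is degenerate between map and offsets, hence the
    mean removal.
    """
    nseg = len(tod) // seg_len
    offsets = np.zeros(nseg)
    for _ in range(n_iter):
        # Bin a map from the baseline-subtracted TOD
        cleaned = tod - np.repeat(offsets, seg_len)
        m = np.bincount(pixels, weights=cleaned, minlength=npix)
        hits = np.bincount(pixels, minlength=npix)
        m = m / np.maximum(hits, 1)
        # Re-estimate one offset per segment from the residuals
        resid = tod - m[pixels]
        offsets = resid.reshape(nseg, seg_len).mean(axis=1)
    return m - m.mean(), offsets
```

Crossing points, i.e. pixels observed in several segments, are what make the offsets distinguishable from sky signal.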
Abstract:
When ordinary nuclear matter is heated to a high temperature of ~ 10^12 K, it undergoes a deconfinement transition to a new phase, the strongly interacting quark-gluon plasma. While the color-charged fundamental constituents of the nuclei, the quarks and gluons, are at low temperatures permanently confined inside color-neutral hadrons, in the plasma the color degrees of freedom become dominant over nuclear, rather than merely nucleonic, volumes. Quantum Chromodynamics (QCD) is the accepted theory of the strong interactions, and it confines quarks and gluons inside hadrons. The theory was formulated in the early seventies, but deriving first-principles predictions from it still remains a challenge, and novel methods of studying it are needed. One such method is dimensional reduction, in which the high-temperature dynamics of static observables of the full four-dimensional theory is described using a simpler three-dimensional effective theory, having only the static modes of the various fields as its degrees of freedom. A perturbatively constructed effective theory is known to provide a good description of the plasma at high temperatures, where asymptotic freedom makes the gauge coupling small. Numerical lattice simulations have, however, shown that the perturbatively constructed theory gives a surprisingly good description of the plasma all the way down to temperatures a few times the transition temperature. Near the critical temperature, however, the effective theory ceases to give a valid description of the physics, since it fails to respect the approximate center symmetry of the full theory. The symmetry plays a key role in the dynamics near the phase transition, and thus one expects that the regime of validity of the dimensionally reduced theories can be significantly extended towards the deconfinement transition by incorporating the center symmetry in them.
In the introductory part of the thesis, the status of dimensionally reduced effective theories of high-temperature QCD is reviewed, placing emphasis on the phase structure of the theories. In the first research paper included in the thesis, the non-perturbative input required in computing the g^6 term in the weak coupling expansion of the pressure of QCD is computed in the effective theory framework for an arbitrary number of colors. The last two papers, on the other hand, focus on the construction of the center-symmetric effective theories, and subsequently the first non-perturbative studies of these theories are presented. Non-perturbative lattice simulations of a center-symmetric effective theory for SU(2) Yang-Mills theory show --- in sharp contrast to the perturbative setup --- that the effective theory accommodates a phase transition in the correct universality class of the full theory. This transition is seen to take place at a value of the effective theory coupling constant that is consistent with the full theory coupling at the critical temperature.
Abstract:
Quantum chromodynamics (QCD) is the theory describing the interaction between quarks and gluons. At low temperatures, quarks are confined, forming hadrons, e.g. protons and neutrons. However, at extremely high temperatures the hadrons break apart and the matter transforms into a plasma of individual quarks and gluons. In this thesis the quark-gluon plasma (QGP) phase of QCD is studied using lattice techniques in the framework of the dimensionally reduced effective theories EQCD and MQCD. Two quantities are of particular interest: the pressure (or grand potential) and the quark number susceptibility. At high temperatures the pressure admits a generalised coupling constant expansion, where some coefficients are non-perturbative. We determine the first such contribution, of order g^6, by performing lattice simulations in MQCD. This requires high-precision lattice calculations, which we perform with different numbers of colors N_c to obtain the N_c-dependence of the coefficient. The quark number susceptibility is studied by performing lattice simulations in EQCD. We measure both flavor singlet (diagonal) and non-singlet (off-diagonal) quark number susceptibilities. The finite chemical potential results are obtained using analytic continuation. The diagonal susceptibility approaches the perturbative result above 20 T_c, but below that temperature we observe significant deviations. The results agree well with 4d lattice data down to temperatures of 2 T_c.
Abstract:
Growing up Vietnamese in Finland: a 12-year follow-up of the well-being and sociocultural adaptation of Vietnamese as children/adolescents and as young adults. This study was a quantitative longitudinal study of the acculturation (cultural change), psychological well-being and sociocultural adaptation of Vietnamese who arrived in Finland as children or adolescents in 1979-1991. In the first phase (in 1992), the study involved 97 randomly selected Vietnamese comprehensive-school pupils from around the country, who were compared with their Finnish classmates. The follow-up phase (in 2004) involved 59 of the Vietnamese who had taken part in the first phase, by then aged 20-31. The aim of the study was to determine which factors predicted acculturation outcomes, while taking into account the effects of age and environment (context) on psychological well-being and sociocultural adaptation. Individual acculturation dimensions (language, values and identity) proved more important for psychological well-being and sociocultural adaptation than ethnic, national or bicultural profiles combining the respective languages, values and identities. Identity change occurred over time in the (ethnic) Vietnamese direction, whereas value change occurred in the (national) Finnish direction. Proficiency in both Finnish and Vietnamese increased over time, which had positive effects on both psychological well-being and sociocultural adaptation. Baseline psychological well-being predicted well-being (absence of depression, and self-esteem) in adulthood, but sociocultural adaptation (school performance) as a child or adolescent did not predict educational attainment in adulthood.
Better Finnish proficiency and less identification as Finnish in adulthood, together with an absence of depressive symptoms and less perceived discrimination as a child or adolescent, distinguished the psychologically better-off (non-depressed) adults from those who were depressed. Better educational attainment in adulthood was predicted, on the one hand, by less perceived discrimination as a child or adolescent and, on the other, by better Finnish proficiency, stronger endorsement of national (Finnish) independence values, and yet less identification with Finns in adulthood. The significance of perceived discrimination for psychological well-being, especially in childhood or adolescence, and its long-term effects on psychological well-being and sociocultural adaptation in adulthood demonstrate the need for early intervention in psychological problems and for improving interethnic relations. Keywords: acculturation, psychological well-being, sociocultural adaptation, language, values, identity, Vietnamese, Finland, children, adolescents, young adults
Abstract:
This study offers a reconstruction and critical evaluation of globalization theory, a perspective that has been central to sociology and cultural studies in recent decades, from the viewpoint of media and communications. As the study shows, sociological and cultural globalization theorists rely heavily on arguments concerning media and communications, especially the so-called new information and communication technologies, in the construction of their frameworks. Together with deepening the understanding of globalization theory, the study provides new critical knowledge of the problematic consequences that follow from such strong investment in media and communications in contemporary theory. The book is divided into four parts. The first part presents the research problem, the approach and the theoretical contexts of the study. Following the introduction in Chapter 1, I identify the core elements of globalization theory in Chapter 2. At the heart of globalization theory is the claim that recent decades have witnessed massive changes in the spatio-temporal constitution of society, caused by new media and communications in particular, and that these changes necessitate the rethinking of the foundations of social theory as a whole. Chapter 3 introduces three paradigms of media research (the political economy of media, cultural studies and medium theory), the discussion of which will make it easier to understand the key issues and controversies that emerge in academic globalization theorists' treatment of media and communications. The next two parts offer a close reading of four theorists whose works I use as entry points into academic debates on globalization. I argue that we can make sense of mainstream positions on globalization by dividing them into two paradigms: on the one hand, media-technological explanations of globalization and, on the other, cultural globalization theory.
As examples of the former, I discuss the works of Manuel Castells (Chapter 4) and Scott Lash (Chapter 5). I maintain that their analyses of globalization processes are overly media-centric and result in an unhistorical and uncritical understanding of social power in an era of capitalist globalization. A related evaluation of the second paradigm (cultural globalization theory), as exemplified by Arjun Appadurai and John Tomlinson, is presented in Chapter 6. I argue that because these theorists reject the importance of nation states and the notion of cultural imperialism for cultural analysis, replacing them with a framework of media-generated deterritorializations and flows, they underplay the importance of the neoliberalization of cultures throughout the world. The fourth part (Chapter 7) presents a central research finding of this study, namely that the media-centrism of globalization theory can be understood in the context of the emergence of neoliberalism. I find it problematic that at the same time as capitalist dynamics have been strengthened in social and cultural life, advocates of globalization theory have directed attention to media-technological changes and their sweeping socio-cultural consequences, instead of analyzing the powerful material forces that shape society and culture. I further argue that this shift serves not only analytical but also utopian functions, that is, the longing for a better world in times when such longing is otherwise considered impracticable.
Resumo:
In the post-World War II era human rights have emerged as an enormous global phenomenon. In Finland, human rights have moved, particularly in the 1990s, from the periphery to the center of public policy making and political rhetoric. Human rights education is commonly viewed as the decisive vehicle for emancipating individuals from oppressive societal structures and rendering them conscious of the equal value of others; both are core ideals of the abstract discourse. Yet little empirical research has been conducted on how these goals are realized in practice. These factors provide the background for the present study which, by combining anthropological insights with critical legal theory, has analyzed the educational activities of a Scandinavian and Nordic network of human rights experts and PhD students in 2002-2005. This material has been complemented by data from the proceedings of UN human rights treaty bodies, hearings organized by the Finnish Foreign Ministry, the analysis of different human rights documents, as well as the manner in which human rights are discussed in the Finnish media. As the human rights phenomenon has expanded, human rights experts have acquired widespread societal influence. The content of human rights nevertheless remains ambiguous: on the one hand they are law, on the other, part of a moral discourse. By educating laymen on what human rights are, experts act both as intermediaries and as activists who expand the scope of rights and simultaneously exert increasing political influence. In the educational activities of the analyzed network these roles were visible in the rhetorics of "legality" and "legitimacy". Among experts both of these rhetorics are subject to ongoing professional controversy, yet in the network they are presented as indisputable facts. This contributes to the impression that human rights knowledge is uncontested. 
This study demonstrates how the network's activities embody and strengthen a conception of expertise as located in specific, structurally determined individuals. Simultaneously, its conception of learning stresses students' adoption of knowledge, reinforcing the power of experts over them. The majority of the network's experts are Nordic males, whereas its students are predominantly Nordic females and males from Eastern European and developing countries. Contrary to the ideals of the discourse, the network's activities do not create dialogue, but instead reproduce power structures which are themselves problematic.
Resumo:
"Body and Iron: Essays on the Socialness of Objects" focuses on the bodily-material interaction of human subjects and technical objects. It poses a question, how is it possible that objects have an impact on their human users and examines the preconditions of active efficacy of objects. In this theoretical task the work relies on various discussions drawing from realistic ontology, phenomenology of body, neurophysiology of Antonio Damasio and psychoanalysis to establish both objects and bodies as material entities related in a causal interaction with each other. Out of material interaction emerge a symbolic field, psyche and culture that produce representations of interactions with material world they remain dependent on and conditioned by. Interaction with objects informs the human body via its somatosensory systems: interoseptive and proprioseptive (or kinesthetic) systems provide information to central nervous system of the internal state of the body and muscle tensions and motor activity of the limbs. Capability to control the movements of one's body by the internal "feel" of being a body turns out to be a precondition to the ability to control artificial extensions of the body. Motor activity of the body is involved in every perception of environment as the feel of one's own body is constitutive of any perception of external objects. Perception of an object cause changes in the internal milieu of the body and these changes in the organism form a bodily representation of an external object. Via these "muscle images" the subject can develop a feel for an instrument. Bodily feel for an object is pre-conceptual, practical knowledge that resists articulation but allows sensing the world through the object. This is what I would call sensual knowledge. Technical objects intervene between body and environment, transforming the relation of perception and motor activity. 
Once connected to a vehicle, the human subject has to calibrate visual information about his or her position and movement in space to the bodily actions controlling the machine. It is the machine that mediates the relation of human actions to the relation of the body to its environment. Learning to use the machine necessarily means adjusting one's bodily actions to the responses of the machine in relation to the environmental changes it causes. The responsiveness of the machine to human touch "teaches" its subject by providing feedback on the "correctness" of his or her bodily actions. Correct actions form a body technique of handling the object. This is the socialness of objects: in responding to human actions, they generate their subjects. Learning to handle a machine means accepting the position of the user in the program of action materialized in the construction of the object. Objects mediate, channel and transform the relation of the body to its environment, and via the environment to the body itself, according to their material and technical construction. Objects are sensory media: they channel signals and information from the environment, thus constituting a representation of the environment, a virtual or artificial reality. They also feed the body directly with their powers, equipping their user with means of regulating the somatic and psychic states of the self. For these reasons humans seek the company of objects. Keywords: material objects, material culture, sociology of technology, sociology of body, mobility, driving