46 results for Homogeneous phantom
Abstract:
Boron neutron capture therapy (BNCT) is a form of chemically targeted radiotherapy that utilises the high neutron capture cross-section of the boron-10 isotope to achieve a preferential dose increase in the tumour. BNCT dosimetry poses a special challenge, as the radiation dose absorbed by the irradiated tissues consists of several different dose components. Dosimetry is important because the effect of the radiation on the tissue is correlated with the radiation dose. Consistent and reliable radiation dose delivery and dosimetry are thus basic requirements for radiotherapy. The international recommendations for radiotherapy dosimetry are not directly applicable to BNCT. The existing dosimetry guidance for BNCT provides recommendations but also calls for investigation of complementary methods for comparison and improved accuracy. In this thesis, the quality assurance and stability measurements of the neutron beam monitors used in dose delivery are presented. The beam monitors were found not to be affected by the presence of a phantom in the beam, and the effect of the reactor core power distribution was less than 1%. The weekly stability test with activation detectors has been generally reproducible within the recommended tolerance of 2%. An established toolkit for determining the dose components of epithermal neutron beams is presented and applied in an international dosimetric intercomparison. The quantities measured by the participating groups (neutron flux, fast neutron dose and photon dose) were generally in agreement within the stated uncertainties. However, the uncertainties were large, ranging from 3% to 30% (1 standard deviation), emphasising the importance of dosimetric intercomparisons if clinical data are to be compared between different centres. Measurements with the Exradin type 2M ionisation chamber have been repeated in the epithermal neutron beam in the same measurement configuration over the course of 10 years. The results exclude the severe changes in thermal neutron sensitivity that have been reported for this type of chamber. Microdosimetry and polymer gel dosimetry are studied as complementary methods for epithermal neutron beam dosimetry. For microdosimetry, comparison with ionisation chambers and computer simulation showed that the photon dose measured with microdosimetry was lower than with the two other methods, although the disagreement was within the uncertainties. For the neutron dose, the simulation and microdosimetry results agreed within 10%, while the ionisation chamber technique gave 10-30% lower neutron dose rates than the two other methods. The response of the BANG-3 gel was found to be linear for both photon and epithermal neutron beam irradiation. The dose distribution normalised to the dose maximum measured with MAGIC polymer gel agreed well with the simulated result near the dose maximum, while the spatial difference between the measured and simulated 30% isodose lines was more than 1 cm. In both the BANG-3 and MAGIC gel studies, the interpretation of the results was complicated by the presence of high-LET radiation.
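For orientation, in BNCT the separately determined dose components are commonly combined into a single weighted (photon-equivalent) dose; a generic textbook form, with symbols that are illustrative rather than taken from the thesis, is

\[
D_w = w_B D_B + w_N D_N + w_{\mathrm{fast}} D_{\mathrm{fast}} + w_\gamma D_\gamma ,
\]

where \(D_B\) is the boron dose, \(D_N\) the nitrogen (thermal neutron capture) dose, \(D_{\mathrm{fast}}\) the fast neutron dose, \(D_\gamma\) the photon dose, and the \(w\) factors are the biological weighting factors assigned to each component.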
Abstract:
A better understanding of the limiting step in a first order phase transition, the nucleation process, is of major importance to a variety of scientific fields ranging from atmospheric sciences to nanotechnology and even to cosmology. This is because in most phase transitions the new phase is separated from the mother phase by a free energy barrier. This barrier is crossed in a process called nucleation. Nowadays it is considered that a significant fraction of all atmospheric particles is produced by vapour-to-liquid nucleation. In atmospheric sciences, as well as in other scientific fields, the theoretical treatment of nucleation is mostly based on a theory known as the Classical Nucleation Theory. However, the Classical Nucleation Theory is known to have only limited success in predicting the rate at which vapour-to-liquid nucleation takes place under given conditions. This thesis studies unary homogeneous vapour-to-liquid nucleation from a statistical mechanics viewpoint. We apply Monte Carlo simulations of molecular clusters to calculate the free energy barrier separating the vapour and liquid phases and compare our results against laboratory measurements and Classical Nucleation Theory predictions. According to our results, the work of adding a monomer to a cluster in equilibrium vapour is accurately described by the liquid drop model applied by the Classical Nucleation Theory once the clusters are larger than some threshold size. The threshold cluster sizes contain only a few or some tens of molecules, depending on the interaction potential and temperature. However, the error made in modelling the smallest clusters as liquid drops results in an erroneous absolute value for the cluster work of formation throughout the size range, as predicted by the McGraw-Laaksonen scaling law. By calculating correction factors to the Classical Nucleation Theory predictions for the nucleation barriers of argon and water, we show that the corrected predictions produce nucleation rates that are in good agreement with experiments. For the smallest clusters, the deviation between the simulation results and the liquid drop values is accurately modelled by the low order virial coefficients at modest temperatures and vapour densities, or in other words, in the validity range of the non-interacting cluster theory of Frenkel, Band and Bijl. Our results do not indicate a need for a size dependent replacement free energy correction. The results also indicate that the Classical Nucleation Theory predicts the size of the critical cluster correctly. We also present a new method for calculating the equilibrium vapour density, the size dependence of the surface tension and the planar surface tension directly from cluster simulations. We also show how the size dependence of the cluster surface tension at the equimolar surface is a function of virial coefficients, a result confirmed by our cluster simulations.
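For reference, the liquid drop model invoked by the Classical Nucleation Theory writes the work of forming an n-molecule cluster from a supersaturated vapour as a bulk term plus a surface term; in a standard textbook notation (not necessarily that of the thesis),

\[
W(n) = -n k_B T \ln S + (36\pi)^{1/3} v_l^{2/3} \sigma\, n^{2/3},
\]

where \(S\) is the saturation ratio, \(v_l\) the molecular volume in the liquid and \(\sigma\) the planar surface tension. The barrier height \(W(n^*)\) at the critical size \(n^*\) then controls the nucleation rate through \(J \propto \exp[-W(n^*)/k_B T]\).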
Abstract:
Nucleation is the first step in the formation of a new phase inside a mother phase. Two main forms of nucleation can be distinguished. In homogeneous nucleation, the new phase is formed in a uniform substance. In heterogeneous nucleation, on the other hand, the new phase emerges on a pre-existing surface (nucleation site). Nucleation is the source of about 30% of all atmospheric aerosol, which in turn has noticeable health effects and a significant impact on climate. Nucleation can be observed in the atmosphere, studied experimentally in the laboratory, and is the subject of ongoing theoretical research. This thesis attempts to be a link between experiment and theory. By comparing simulation results to experimental data, the aim is to (i) better understand the experiments and (ii) determine where the theory needs improvement. Computational fluid dynamics (CFD) tools were used to simulate homogeneous one-component nucleation of n-alcohols in argon and helium as carrier gases, homogeneous nucleation in the water-sulfuric acid system, and heterogeneous nucleation of water vapor on silver particles. In the nucleation of n-alcohols, vapor depletion, the carrier gas effect and the carrier gas pressure effect were evaluated, with a special focus on the pressure effect, whose dependence on vapor and carrier gas properties could be specified. The investigation of nucleation in the water-sulfuric acid system included a thorough analysis of the experimental setup, determining the flow conditions, vapor losses, and the nucleation zone. Experimental nucleation rates were compared to various theoretical approaches. We found that none of the considered theoretical descriptions of nucleation captured the role of water in the process at all relative humidities. Heterogeneous nucleation was studied in the activation of silver particles in a TSI 3785 particle counter, which uses water as its working fluid. The role of the contact angle was investigated, and the influence of incoming particle concentrations and homogeneous nucleation on the counting efficiency was determined.
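As background for the activation of seed particles in such a condensation particle counter (a standard relation, not stated in the abstract itself), the Kelvin equation gives the smallest droplet diameter that can grow at a given saturation ratio \(S\) for a perfectly wetted particle,

\[
d_K = \frac{4 \sigma M}{\rho R T \ln S},
\]

with \(\sigma\) the surface tension, \(M\) the molar mass and \(\rho\) the liquid density of the working fluid; a non-zero contact angle on the seed surface raises the supersaturation actually needed for activation.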
Abstract:
Boron neutron capture therapy (BNCT) is a radiotherapy that has mainly been used to treat malignant brain tumours, melanomas, and head and neck cancer. In BNCT, the patient receives an intravenous infusion of a 10B-carrier, which accumulates in the tumour area. The tumour is irradiated with epithermal or thermal neutrons, which produce boron neutron capture reactions that generate heavy particles that damage tumour cells. In Finland, boronophenylalanine fructose (BPA-F) is used as the 10B-carrier. Currently, the transfer of boron from blood to tumour, as well as the spatial and temporal accumulation of boron in the brain, are not precisely known. Proton magnetic resonance spectroscopy (1H MRS) could be used for selective BPA-F detection and quantification, as the aromatic protons of BPA resonate in a spectral region that is clear of brain metabolite signals. This study, which included both phantom and in vivo studies, examined the validity of 1H MRS as a tool for BPA detection. In the phantom study, BPA quantification was studied at 1.5 and 3.0 T with single voxel 1H MRS, and at 1.5 T with magnetic resonance spectroscopic imaging (MRSI). The detection limit of BPA was also determined in phantom conditions at 1.5 T and 3.0 T using single voxel 1H MRS, and at 1.5 T using MRSI. In phantom conditions, BPA quantification accuracies of ±5% and ±15% were achieved with single voxel MRS using an external or an internal (internal water signal) concentration reference, respectively. For MRSI, a quantification accuracy of <5% was obtained using an internal concentration reference (creatine). The detection limits of BPA in phantom conditions for the PRESS sequence were 0.7 mM (3.0 T) and 1.4 mM (1.5 T) with 20 × 20 × 20 mm3 single voxel MRS, and 1.0 mM with acquisition-weighted MRSI (nominal voxel volume 10 (RL) × 10 (AP) × 7.5 (SI) mm3). In the in vivo study, MRSI, single voxel MRS, or both were performed for ten patients (patients 1-10) on the day of BNCT. Three patients had glioblastoma multiforme (GBM), five had a recurrent or progressing GBM or grade III anaplastic astrocytoma, and two had head and neck cancer. For nine patients (patients 1-9), MRS/MRSI was performed 70-140 min after the second irradiation field, and for one patient (patient 10), the MRSI study began 11 min before the end of the BPA-F infusion and ended 6 min after the end of the infusion. For comparison, single voxel MRS was performed before BNCT for two patients (patients 3 and 9), and for one patient (patient 9) MRSI was performed one month after treatment. For one patient (patient 10), MRSI was performed four days before the infusion. Signals from the aromatic region of the tumour spectrum were detected on the day of BNCT in three patients, indicating that in favourable cases it is possible to detect BPA in vivo in the patient’s brain after BNCT treatment or at the end of the BPA-F infusion. However, because the shape and position of the detected signals did not exactly match the BPA spectrum detected in vitro, the assignment of the signals to BPA is difficult. The opportunity to perform MRS immediately after the end of the BPA-F infusion for more patients is necessary to evaluate the suitability of 1H MRS for BPA detection or quantification for treatment planning purposes. However, it could be possible to use MRSI as a criterion in selecting patients for BNCT.
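As a rough illustration of quantification against an internal concentration reference (a generic MRS relation, not the exact calibration procedure of the study), the unknown concentration is estimated from the ratio of integrated peak areas scaled by the number of contributing protons,

\[
[\mathrm{BPA}] \approx [\mathrm{ref}] \times \frac{S_{\mathrm{BPA}}}{S_{\mathrm{ref}}} \times \frac{n_{\mathrm{ref}}}{n_{\mathrm{BPA}}},
\]

where \(S\) denotes the integrated signal area, \(n\) the number of protons contributing to the signal, and relaxation and other correction factors are omitted for simplicity.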
Abstract:
Nucleation is the first step of a first order phase transition. A new phase always emerges in nucleation phenomena. The two main categories of nucleation are homogeneous nucleation, where the new phase is formed in a uniform substance, and heterogeneous nucleation, where nucleation occurs on a pre-existing surface. In this thesis the main attention is paid to heterogeneous nucleation. The thesis approaches nucleation phenomena from two theoretical perspectives: the classical nucleation theory and the statistical mechanical approach. The formulation of the classical nucleation theory relies on equilibrium thermodynamics and the use of macroscopically determined quantities to describe the properties of small nuclei, sometimes consisting of just a few molecules. The statistical mechanical approach is based on interactions between single molecules and does not rest on the same assumptions as the classical theory. This work gathers the present theoretical knowledge of heterogeneous nucleation and utilizes it in computational model studies. A new exact molecular approach to heterogeneous nucleation was introduced and tested by Monte Carlo simulations. The results obtained from the molecular simulations were interpreted by means of the concepts of the classical nucleation theory. Numerical calculations were carried out for a variety of substances nucleating on different substrates. The classical theory of heterogeneous nucleation was employed in calculations of one-component nucleation of water on newsprint paper, Teflon and cellulose film, and of binary nucleation of water-n-propanol and water-sulphuric acid mixtures on silver nanoparticles. The results were compared with experimental results. The molecular simulation studies involved homogeneous nucleation of argon and heterogeneous nucleation of argon on a planar platinum surface. It was found that the use of a microscopic contact angle as a fitting parameter in calculations based on the classical theory of heterogeneous nucleation leads to fair agreement between the theoretical predictions and experimental results. In the presented cases the microscopic angle was always smaller than the contact angle obtained from macroscopic measurements. Furthermore, the molecular Monte Carlo simulations revealed that the concept of the geometric contact parameter in heterogeneous nucleation calculations can work surprisingly well even for very small clusters.
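For context, in the classical theory of heterogeneous nucleation the contact angle enters through a geometric factor that reduces the homogeneous nucleation barrier; for a planar substrate the standard textbook form (not the thesis's own notation) is

\[
W^*_{\mathrm{het}} = f(\theta)\, W^*_{\mathrm{hom}},
\qquad
f(\theta) = \frac{(2+\cos\theta)(1-\cos\theta)^2}{4},
\]

so that \(f \to 0\) for a perfectly wetting surface (\(\theta = 0\)) and \(f \to 1\) for a completely non-wetting one (\(\theta = 180^\circ\)); for a curved seed such as a nanoparticle, the corresponding Fletcher factor also depends on the ratio of the seed radius to the critical cluster radius.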
Abstract:
The aims of the thesis are (1) to present a systematic evaluation of generation and its relevance as a sociological concept, (2) to reflect on how generational consciousness, i.e. generation as an object of collective identification that has social significance, can emerge and take shape, and (3) to analyze empirically the generational experiences and consciousness of one specific generation, namely the Finnish baby boomers (b. 1945-1950). The thesis contributes to the discussion on the social (as distinct from the genealogical) meaning of the concept of generation, launched by Karl Mannheim's classic Das Problem der Generationen (1928), in which the central idea is that a certain group of people is bonded together by a shared experience and that this bonding can result in a distinct self-consciousness. The thesis comprises six original articles and an extensive summarizing chapter. In the empirical articles, the baby boomers are studied on the basis of nationally representative survey data (N = 2628) and narrative life-story interviews (N = 38). In the article that discusses the connection between generations and social movements, the analysis is based on the member survey of Attac Finland (N = 1096). Three main themes were clarified in the thesis. (1) In the social sense the concept of generation is a modern, problematic, and ultimately political concept. It served the interests of the intellectuals who developed the concept in the early 20th century and provided them, as an alternative to the concept of social class, with a new way of thinking about social change and progress. The concept of generation is always coupled with the concept of Zeitgeist or some other controversial way of defining what is essential, i.e. what creates generations, in a given culture. Thus generation is, as a product of definition and classification struggles, a contested concept. The concept also carries clearly elitist connotations; the idea of some kind of vanguard (the elite) that represents an entire generation by proclaiming itself its spokesman automatically creates a counterpart, namely the others in the peer group who are thought to be represented (the masses). (2) Generational consciousness cannot emerge as a result of any automatic process or endogenously; it must be made. There has to be somebody who represents the generation in order for that generation to exist in people's minds and as an object of identification; generational experiences and their meanings must be articulated. Hence, social generations are, in a fundamental manner, discursively constructed. The articulations of generational experiences (speeches, writings, manifestos, labels etc.) can be called the discursive dimension of social generations, and through this notion it becomes visible how public discourse shapes people's generational consciousness. Another important element in the process is collective memory, as generational consciousness often takes form only retrospectively. (3) The Finnish baby boomers are not a united or homogeneous generation but are divided into many smaller sections with specific generational experiences and consciousnesses. The content of the generational consciousness of the baby boomers is heavily politically charged. A salient dividing line inside the age group is formed by individual attitudes towards so-called 1960s radicalism. Identification with the 1960s generation functions today as a positive self-definition of a certain small leftist elite group, and the values and characteristics usually connected with the idea of the 1960s generation do not represent the whole age group. On the contrary, among some members of the baby boom generation, generational identification is still directed by the experience of how traditional values were disgraced in the 1960s. As objects of identification, the neutral term baby boomers and the charged 1960s generation are totally different things, and therefore they should not be used as synonyms. Although the significance of the group identifying with the 1960s generation is often overestimated, they are nevertheless special with respect to generational consciousness because they have presented themselves as the voice of the entire generation. Their generational interpretations have spread through the media with the help of certain iconic images of the generation, so much so that 1960s radicalism has become an indirect generational experience for other parts of the baby boom cohort as well.
Abstract:
Recently it has been recognized that evolutionary aspects play a major role in conservation issues concerning a species. In this thesis I have combined evolutionary research with conservation studies to provide new insight into these fields. The study object of this thesis is the house sparrow, a species with features that make it interesting for this type of study. The house sparrow has been ubiquitous almost all over the world. Even though the species is still abundant, several countries have reported major declines. These declines have taken place in a relatively short time, covering both urban and rural habitats. In Finland this species has declined by more than two thirds in just over two decades. In addition, as the house sparrow lives only in human-inhabited areas, it can also raise public awareness of conservation issues. I used both an extensive museum collection of house sparrows collected in the 1980s from all over Finland and samples collected in 2009 from 12 of the previously sampled localities. I used molecular techniques to study neutral genetic variation within and genetic differentiation between the study populations. I then combined this knowledge with morphometric measurement data. In addition, I analyzed eight heavy metals from the livers of house sparrows that lived in either rural or urban areas in the 1980s and evaluated the role of heavy metal pollution as a possible cause of the declines. Even though the dispersal of house sparrows is limited, I found that just as the declines started in the 1980s, the house sparrows formed a genetically panmictic population on the scale of the whole of Finland. Compared to Norway, where neutral genetic divergence has been found even over small geographic distances, I concluded that this difference is due to contrasting landscapes. In Finland the landscape is rather homogeneous, facilitating the movement of these birds and maintaining gene flow even with low dispersal. To see whether the declines have had an effect on the neutral genetic variation of the populations, I compared the historical and contemporary genetic data. I showed that even though genetic diversity has not decreased due to the drastic declines, the populations have indeed become more differentiated from each other. This shows that even in a still quite abundant species, declines can have an effect on genetic variation. It was also shown that genetic diversity and differentiation may approach their new equilibria at different rates. This emphasizes the importance of studying both of them; if the latter has increased, it should be taken as a warning sign of a possible loss of genetic diversity in the future. One of the factors suggested to be responsible for the house sparrow declines is heavy metal pollution. When studying the livers of house sparrows from the 1980s, I found higher heavy metal concentrations in urban than in rural habitats, but the levels were comparatively low, and on that basis heavy metal pollution does not seem to be a direct cause of the declines in Finland. However, heavy metals are known to decrease the amount of insects in urban areas, and thus in cities heavy metals may have an indirect effect on house sparrows. Although neutral genetic variation is an important tool for conservation genetics, it does not tell the whole story. Since neutral genetic variation is not affected by selection, the information it provides can be one-sided. It is possible that even when neutral genetic differentiation is low, there can be substantial variation in additive genetic traits, indicating local adaptation. Therefore I performed a comparison between neutral genetic differentiation and phenotypic differentiation. I found that two traits out of seven are likely to be under directional selection, whereas the others could be affected by random genetic drift. Bergmann's rule may lie behind the observed directional selection in wing length and body mass. These results highlight the importance of estimating both neutral and adaptive genetic variation.
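As a sketch of what such a comparison typically contrasts (standard population-genetic quantities, not necessarily the exact estimators used in the thesis), neutral differentiation is commonly summarised by F_ST and phenotypic differentiation by a quantitative analogue such as P_ST,

\[
F_{ST} = \frac{H_T - H_S}{H_T},
\qquad
P_{ST} = \frac{\sigma^2_B}{\sigma^2_B + 2\sigma^2_W},
\]

where \(H_T\) and \(H_S\) are the expected heterozygosities of the total population and of the subpopulations, and \(\sigma^2_B\) and \(\sigma^2_W\) are the between- and within-population components of phenotypic variance (heritability corrections omitted here). Traits whose P_ST clearly exceeds the neutral F_ST are candidates for directional selection, while P_ST comparable to F_ST is consistent with drift.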
Abstract:
We reformulate and extend our recently introduced quantum kinetic theory for interacting fermion and scalar fields. Our formalism is based on the coherent quasiparticle approximation (cQPA) where nonlocal coherence information is encoded in new spectral solutions at off-shell momenta. We derive explicit forms for the cQPA propagators in the homogeneous background and show that the collision integrals involving the new coherence propagators need to be resummed to all orders in gradient expansion. We perform this resummation and derive generalized momentum space Feynman rules including coherent propagators and modified vertex rules for a Yukawa interaction. As a result we are able to set up self-consistent quantum Boltzmann equations for both fermion and scalar fields. We present several examples of diagrammatic calculations and numerical applications including a simple toy model for coherent baryogenesis.
Abstract:
Many Finnish IT companies have gone through numerous organizational changes over the past decades. This book draws attention to how stability may be central to software product development experts and IT workers more generally, who continuously have to cope with such change in their workplaces. It does so by analyzing and theorizing change and stability as intertwined and co-existent, thus throwing light on how it is possible that, for example, even if ‘the walls fall down the blokes just code’ and maintain a sense of stability in their daily work. Rather than reproducing the picture of software product development as exciting cutting edge activities and organizational change as dramatic episodes, the study takes the reader beyond the myths surrounding these phenomena to the mundane practices, routines and organizings in product development during organizational change. An analysis of these ordinary practices offers insights into how software product development experts actively engage in constructing stability during organizational change through a variety of practices, including solidarity, homosociality, close relations to products, instrumental or functional views on products, preoccupations with certain tasks and humble obedience. Consequently, the study shows that it may be more appropriate to talk about varieties of stability, characterized by a multitude of practices of stabilizing rather than states of stagnation. Looking at different practices of stability in depth shows the creation of software as an arena for micro-politics, power relations and increasing pressures for order and formalization. The thesis gives particular attention to power relations and processes of positioning following organizational change: how social actors come to understand themselves in the context of ongoing organizational change, how they comply with and/or contest dominant meanings, how they identify and dis-identify with formalization, and how power relations often are reproduced despite dis-identification. Related to processes of positioning, the reader is also given a glimpse into what being at work in a male-dominated and relatively homogeneous work environment looks like. It shows how the strong presence of men or “blokes” of a particular age and education seems to become invisible in workplace talk that appears ‘non-conscious’ of gender.
Abstract:
The aim of this study was to evaluate and test methods that could improve the local estimates of a general model fitted to a large area. In the first three studies, the intention was to divide the study area into sub-areas that were as homogeneous as possible according to the residuals of the general model, and in the fourth study, the localization was based on the local neighbourhood. According to spatial autocorrelation (SA), points closer together in space are more likely to be similar than those that are farther apart. Local indicators of SA (LISAs) test the similarity of data clusters. A LISA was calculated for every observation in the dataset, and together with the spatial position and the residual of the global model, the data were segmented using two different methods: classification and regression trees (CART) and the multiresolution segmentation algorithm (MS) of the eCognition software. The general model was then re-fitted (localized) to the resulting sub-areas. In kriging, the SA is modelled with a variogram, and the spatial correlation is a function of the distance (and direction) between the observation and the point of calculation. A general trend is corrected with the residual information of the neighbourhood, whose size is controlled by the number of nearest neighbours, with nearness measured as Euclidean distance. With all methods, the root mean square errors (RMSEs) were lower than for the general model, but with the methods that segmented the study area, the variation in the individual localized RMSEs was wide. Therefore, an element capable of controlling the division or localization should be included in the segmentation-localization process. Kriging, on the other hand, provided stable estimates when the number of neighbours was sufficient (over 30), thus offering the best potential for further studies. Even CART could be combined with kriging or with non-parametric methods, such as most similar neighbours (MSN).
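To make the LISA step concrete, here is a minimal Python sketch (synthetic data and a hypothetical function name, not the thesis's actual implementation) of a local Moran's I computed for the residuals of a global model using a row-standardised k-nearest-neighbour weights matrix:

```python
# Minimal sketch: local Moran's I for global-model residuals.
# Hypothetical names and synthetic data; illustrative only.
import numpy as np

def local_morans_i(coords: np.ndarray, residuals: np.ndarray, k: int = 8) -> np.ndarray:
    """Return the local Moran's I statistic for each observation."""
    z = (residuals - residuals.mean()) / residuals.std()      # standardised residuals
    # pairwise Euclidean distances between observation locations
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                                # an observation is not its own neighbour
    n = len(z)
    w = np.zeros((n, n))
    for i in range(n):
        nearest = np.argsort(d[i])[:k]                         # k nearest neighbours
        w[i, nearest] = 1.0 / k                                # row-standardised weights
    return z * (w @ z)                                         # I_i = z_i * sum_j w_ij * z_j

# Example with synthetic plot locations and residuals:
rng = np.random.default_rng(0)
coords = rng.uniform(0.0, 100.0, size=(200, 2))
residuals = rng.normal(0.0, 1.0, size=200)
lisa = local_morans_i(coords, residuals)   # positive values indicate local clustering of similar residuals
```

Values of I_i well above zero flag observations surrounded by similarly signed residuals, which is the kind of local structure the segmentation step can then exploit.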
Abstract:
Previous research on Human Resource Management (HRM) has focused extensively on the potential relationships between the use of HRM practices and organizational performance. Extant research in HRM has been based on the underlying assumption that HRM practices can enhance organizational performance through their impact on positive employee attitudes and performance, that is, employee reactions to HRM. At the current state of research, however, it remains unclear how employees come to perceive and react to HRM practices and to what extent employees in organizations, units and teams react to such practices in similar or widely different ways. In fact, recent HRM studies indicate that employee reactions to HRM may be far less homogeneous than assumed. This raises the question of whether the linkage between HRM and organizational outcomes can be explained by employee reactions in terms of attitudes and performance if these reactions are largely idiosyncratic. Accordingly, this thesis aims to shed light on the processes that shape individuals’ reactions to HRM practices and how these processes may influence the variance, or sharedness, in such reactions among employees in organizations, units and teams. By theoretically developing and empirically examining the effects of employee perceptions of HRM practices from the perspective of ‘HRM as signaling’ and psychological contract theory, the main contributions of this thesis address the following research questions: (i) how employee perceptions of HRM practices relate to individual and collective employee attitudes and performance; (ii) how employee perceptions of HRM practices relate to variance in employee attitudes and performance; and (iii) how collective employee performance mediates the relationship between employee perceptions of HRM practices and organizational performance. Regarding the first research question, the findings indicate that individuals do respond positively to HRM practices by adjusting their felt obligations towards the employer. This finding is in line with the idea of HRM as a signaling device where each HRM practice, implicitly or explicitly, sends signals to employees about promised rewards (inducements) and the behaviors (obligations) expected in return. The relationship was also confirmed at the group level of analysis. What is more, variance was found to play an important role in that employee groups with more similar perceptions of the HRM system displayed a stronger relationship between HRM and employee obligations. Concerning the second question, the findings were somewhat contradictory in that a strong HRM system was found to be negatively related to variance in employee performance but not to variance in employee obligations. Regarding the third question, the findings confirmed linkages between the HRM system and organizational performance at the group level and between the HRM system and employee performance at the individual level. Also, the entire chain of links from the HRM system through variance in employee performance, and further through the level of employee performance to organizational performance, was significant.
Abstract:
Segregation of particle systems is a phenomenon in which the components of a homogeneous powder mixture tend to separate from each other. The segregation tendency of a powder depends on the particle properties, the surrounding conditions and the interactions between particles. A vast number of segregation mechanisms have been described in the literature, and even small differences in particle properties and interactions can lead to completely different segregation mechanisms. From the perspective of the pharmaceutical industry, segregation is a very central phenomenon, and it is not yet understood well enough to be avoided systematically. Current segregation research is largely based on learning through trial and error. Innovative research methods would be needed to truly understand the segregation phenomenon. The aim of the experimental part was to develop and carry out basic testing of a method for studying the segregation behaviour of different particle systems, and to use this method to study the segregation of pharmaceutical granule and pellet mixtures. The goal was to demonstrate that the operating principle of the developed Babel device is suitable for studying the segregation behaviour of particle systems, but the experiments performed amounted mainly to testing of the method and the device. The limitations imposed by the Babel device, electrostatic charging of the particles and the interactions between particles turned out to be problems. The straightforward approaches used were not sufficient to induce segregation in the Babel device. The convection roll generated by vertical shaking prevented segregation from occurring. In conclusion, the Babel device measures well and reproducibly, and it is able to distinguish particles of different sizes and different size distributions from each other. A development goal for the device would be to make shaking-induced segregation more clearly visible in powder mixtures. This would allow conclusions to be drawn about the segregation tendency of a powder mixture and the segregation mechanisms prevailing in the system. Further development of the device and the method could provide useful additional information, which would help to understand segregation as a phenomenon even better.
Abstract:
The aim of this study was to investigate powder and tablet behavior at the level of mechanical interactions between single particles. Various aspects of powder packing, mixing, compression, and bond formation were examined with the aid of computer simulations. The packing and mixing simulations were based on spring forces acting between particles. The packing and breakage simulations included systems in which permanent bonds were formed and broken between particles, based on their interaction strengths. During the process, a new simulation environment based on Newtonian mechanics and elementary interactions between the particles was created, and a new method for evaluating mixing was developed. Powder behavior is a complicated process, and many of its aspects are still unclear. Powders as a whole exhibit some aspects of solids and others of liquids; therefore, their physics is far from clear. However, using relatively simple models based on particle-particle interactions, many powder properties could be replicated during this work. Simulated packing densities were similar to values reported in the literature. The method developed for describing powder mixing correlated well with previous methods. The new method can be applied to determine mixing in completely homogeneous materials, without dividing them into different components. As such, it can describe the efficiency of the mixing method regardless of the powder's initial setup. The mixing efficiency under different vibrations was examined, and certain combinations of amplitude, direction, and frequency were found to result in better mixing while using less energy. Simulations using exponential force potentials between particles were able to explain the elementary compression behavior of tablets and created force distributions similar to the pressure distributions reported in the literature. Tablet-breaking simulations resulted in breaking strengths similar to measured tablet breaking strengths. In general, many aspects of powder behavior can be explained by mechanical interactions at the particle level, and single particle properties can be reliably linked to powder behavior with accurate simulations.
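As a rough illustration of the kind of particle-level mechanics described above, here is a minimal Python sketch of a damped linear-spring contact model with explicit time integration; the parameter values and function names are made up for illustration and are not taken from the thesis:

```python
# Minimal sketch of spring-force interactions between particles (generic
# damped linear-spring contact model; hypothetical parameters, not the thesis code).
import numpy as np

def contact_forces(pos, vel, radius, k=1.0e4, damping=5.0):
    """Pairwise repulsive spring + dashpot force when two particles overlap."""
    n = len(pos)
    forces = np.zeros_like(pos)
    for i in range(n):
        for j in range(i + 1, n):
            rij = pos[j] - pos[i]
            dist = np.linalg.norm(rij)
            overlap = 2.0 * radius - dist
            if overlap > 0.0:                                   # particles in contact
                normal = rij / dist
                rel_v = np.dot(vel[j] - vel[i], normal)         # relative normal velocity
                f = (k * overlap - damping * rel_v) * normal    # spring + damping term
                forces[i] -= f
                forces[j] += f
    return forces

def step(pos, vel, radius, mass=1.0, dt=1e-4, gravity=(0.0, -9.81)):
    """One semi-implicit Euler step of Newton's equations of motion."""
    acc = contact_forces(pos, vel, radius) / mass + np.asarray(gravity)
    vel = vel + dt * acc
    pos = pos + dt * vel
    return pos, vel
```

Repeated calls to step advance a packing or shaking simulation in time; permanent bond formation and breakage, as studied in the thesis, would add further interaction terms on top of this contact force.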
Abstract:
The aim of this thesis is to examine skilled migrants’ satisfaction with the Helsinki Metropolitan Area. The examination is carried out on three scales: housing, neighbourhoods and the city region. A specific focus is on the built environment and how it meets the needs of the migrants. The empirical data consist of 25 semi-structured interviews with skilled migrants and an additional 5 expert interviews. A skilled and educated workforce is an increasingly important resource in the new economy, and cities are competing globally for talented workers. With an aging population and a need to develop its innovation structure, the Helsinki Metropolitan Area needs a migrant workforce. It has been stated that quality of place is a central factor for skilled migrants when choosing where to settle, and from this perspective their satisfaction with the region is significant. In housing, the skilled migrants found the price-quality ratio and the general sizes of apartments inadequate. The housing market is difficult for the migrants to approach, since they often do not speak Finnish and there are prejudices towards foreigners. The general quality of housing was nevertheless rated highly. On the neighbourhood level, the skilled migrants had settled in residential areas that are also preferred by Finnish skilled workers. While the migrants showed a suburban orientation in their settlement patterns, they were not concentrated in the suburban areas that host large shares of traditional immigrant groups. The migrants were usually satisfied with their neighbourhoods; however, some of the suburban dwellers were dissatisfied with the services and social life in their neighbourhoods. At the level of the city region, the most challenging aspect for the skilled migrants was the social life. The migrants felt that the social environment is homogeneous and difficult to approach. The physical environment was generally rated highly, the most appreciated features being public transportation, the human scale of Metropolitan Helsinki, cleanliness, and the urban nature. Urban culture and services were seen as good for the city region’s size, but lacking in international comparison.