40 results for HOMOGENEOUS COPOLYMERS
Abstract:
This study is part of the joint project "The Genetic Epidemiology and Molecular Genetics of schizophrenia in Finland" between the Departments of Mental Health and Alcohol Research, and Molecular Medicine at the National Public Health Institute. In the study, we utilized three nationwide health care registers: 1) the Hospital Discharge Register, 2) the Free Medication Register, and 3) the Disability Pension Register, plus the National Population Register, to identify all patients with schizophrenia born from 1940 to 1976 (N=33,731) in Finland, and their first-degree relatives. We identified 658 patients with at least one parent born in a homogeneous isolate in northeastern Finland, as well as 4904 familial schizophrenia patients with at least two affected siblings from the whole country. The comparison group was derived from the Health 2000 Study. We collected case records and reassessed the register diagnoses. We contacted the isolate patients and a random sample of patients from the whole country for diagnostic clinical interviews and to assess the negative and positive symptoms and signs of schizophrenia. In addition to these patients, we interviewed siblings who were initially healthy according to the Hospital Discharge Register. Of those with a register diagnosis of schizophrenia, schizoaffective or schizophreniform disorder, 69% received a record-based consensus diagnosis and 63% an interview-based diagnosis of schizophrenia. Patients with schizophrenia who had first-degree relatives with a psychotic disorder showed more severe affective flattening and alogia than those who were the only affected individuals in their family. The novel findings were: 1) The prevalence of schizophrenia in the isolate was relatively high based on register (1.5%), case record (0.9-1.3%), and interview (0.7-1.2%) data.
2) Isolate patients, regardless of their familial loading for schizophrenia, had fewer delusions and hallucinations than the whole-country familial patients, which may be related to the genetic homogeneity of the isolate. This phenotype encourages the use of endophenotypes in genetic analyses instead of diagnoses alone. 3) The absence of a register diagnosis does not confirm that siblings are healthy, because 7.7% of siblings had psychotic symptoms already before the register diagnoses were identified in 1991. For genetic research, the register diagnosis should therefore be reassessed using either a structured interview or a best-estimate case note consensus diagnosis. Structured clinical interview methods should also be considered in clinical practice.
Abstract:
A better understanding of the limiting step in a first-order phase transition, the nucleation process, is of major importance to a variety of scientific fields ranging from atmospheric sciences to nanotechnology and even to cosmology. This is because in most phase transitions the new phase is separated from the mother phase by a free energy barrier, which is crossed in a process called nucleation. Nowadays it is considered that a significant fraction of all atmospheric particles is produced by vapor-to-liquid nucleation. In atmospheric sciences, as well as in other scientific fields, the theoretical treatment of nucleation is mostly based on the Classical Nucleation Theory. However, the Classical Nucleation Theory is known to have only limited success in predicting the rate at which vapor-to-liquid nucleation takes place under given conditions. This thesis studies unary homogeneous vapor-to-liquid nucleation from a statistical mechanics viewpoint. We apply Monte Carlo simulations of molecular clusters to calculate the free energy barrier separating the vapor and liquid phases and compare our results against laboratory measurements and Classical Nucleation Theory predictions. According to our results, the work of adding a monomer to a cluster in equilibrium vapor is accurately described by the liquid drop model applied by the Classical Nucleation Theory once the clusters are larger than some threshold size. The threshold cluster sizes contain only a few to some tens of molecules, depending on the interaction potential and temperature. However, the error made in modeling the smallest clusters as liquid drops results in an erroneous absolute value for the cluster work of formation throughout the size range, as predicted by the McGraw-Laaksonen scaling law.
By calculating correction factors to Classical Nucleation Theory predictions for the nucleation barriers of argon and water, we show that the corrected predictions produce nucleation rates that are in good agreement with experiments. For the smallest clusters, the deviation between the simulation results and the liquid drop values is accurately modelled by the low-order virial coefficients at modest temperatures and vapor densities, or in other words, in the validity range of the non-interacting cluster theory of Frenkel, Band and Bijl. Our results do not indicate a need for a size-dependent replacement free energy correction. The results also indicate that Classical Nucleation Theory predicts the size of the critical cluster correctly. We also present a new method for calculating the equilibrium vapor density, the size dependence of the surface tension and the planar surface tension directly from cluster simulations. We also show how the size dependence of the cluster surface tension at the equimolar surface is a function of the virial coefficients, a result confirmed by our cluster simulations.
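The liquid drop model invoked above writes the work of forming an n-molecule cluster as a bulk term, favourable for supersaturation S > 1, plus a surface term scaling as n^(2/3); the barrier is the maximum of this curve. A minimal Python sketch, using illustrative water-like parameter values that are assumptions for demonstration, not values taken from the thesis:

```python
import math

def cnt_work_of_formation(n, S, sigma, v_l, T):
    """Classical liquid-drop work of forming an n-molecule cluster:
    -n*kT*ln(S) (bulk, drives growth) + theta*n**(2/3) (surface penalty)."""
    kT = 1.380649e-23 * T
    # Surface prefactor for a spherical drop of molecular volume v_l:
    theta = (36 * math.pi) ** (1 / 3) * v_l ** (2 / 3) * sigma
    return -n * kT * math.log(S) + theta * n ** (2 / 3)

# Illustrative water-like inputs (assumed): supersaturation 3, surface
# tension 0.072 N/m, molecular volume 3e-29 m^3, temperature 293 K.
S, sigma, v_l, T = 3.0, 0.072, 3.0e-29, 293.0
work = [cnt_work_of_formation(n, S, sigma, v_l, T) for n in range(1, 500)]
n_star = work.index(max(work)) + 1   # critical cluster size (barrier top)
barrier = max(work)                  # nucleation barrier in joules
```

The thesis's point is that the absolute value of this curve is off for the smallest clusters, which shifts the whole curve even where the liquid drop model describes the monomer-addition work correctly.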
Abstract:
Nucleation is the first step in the formation of a new phase inside a mother phase. Two main forms of nucleation can be distinguished. In homogeneous nucleation, the new phase is formed in a uniform substance. In heterogeneous nucleation, on the other hand, the new phase emerges on a pre-existing surface (nucleation site). Nucleation is the source of about 30% of all atmospheric aerosol, which in turn has noticeable health effects and a significant impact on climate. Nucleation can be observed in the atmosphere, studied experimentally in the laboratory, and is the subject of ongoing theoretical research. This thesis attempts to be a link between experiment and theory. By comparing simulation results to experimental data, the aim is to (i) better understand the experiments and (ii) determine where the theory needs improvement. Computational fluid dynamics (CFD) tools were used to simulate homogeneous one-component nucleation of n-alcohols in argon and helium as carrier gases, homogeneous nucleation in the water-sulfuric acid system, and heterogeneous nucleation of water vapor on silver particles. In the nucleation of n-alcohols, vapor depletion, the carrier gas effect and the carrier gas pressure effect were evaluated, with a special focus on the pressure effect, whose dependence on vapor and carrier gas properties could be specified. The investigation of nucleation in the water-sulfuric acid system included a thorough analysis of the experimental setup, determining the flow conditions, vapor losses, and nucleation zone. Experimental nucleation rates were compared to various theoretical approaches. We found that none of the considered theoretical descriptions of nucleation captured the role of water in the process at all relative humidities. Heterogeneous nucleation was studied in the activation of silver particles in a TSI 3785 particle counter, which uses water as its working fluid.
The role of the contact angle was investigated, and the influence of incoming particle concentrations and homogeneous nucleation on the counting efficiency was determined.
Abstract:
Nucleation is the first step of a first-order phase transition: a new phase always springs up within the mother phase. The two main categories of nucleation are homogeneous nucleation, where the new phase is formed in a uniform substance, and heterogeneous nucleation, where nucleation occurs on a pre-existing surface. In this thesis, the main attention is paid to heterogeneous nucleation. The thesis treats nucleation phenomena from two theoretical perspectives: the classical nucleation theory and the statistical mechanical approach. The formulation of the classical nucleation theory relies on equilibrium thermodynamics and the use of macroscopically determined quantities to describe the properties of small nuclei, sometimes consisting of just a few molecules. The statistical mechanical approach is based on interactions between single molecules and does not rest on the same assumptions as the classical theory. This work gathers up the present theoretical knowledge of heterogeneous nucleation and utilizes it in computational model studies. A new exact molecular approach to heterogeneous nucleation was introduced and tested by Monte Carlo simulations. The results obtained from the molecular simulations were interpreted by means of the concepts of the classical nucleation theory. Numerical calculations were carried out for a variety of substances nucleating on different substrates. The classical theory of heterogeneous nucleation was employed in calculations of one-component nucleation of water on newsprint paper, Teflon and cellulose film, and binary nucleation of water-n-propanol and water-sulphuric acid mixtures on silver nanoparticles. The results were compared with experimental results. The molecular simulation studies involved homogeneous nucleation of argon and heterogeneous nucleation of argon on a planar platinum surface.
It was found that the use of a microscopic contact angle as a fitting parameter in calculations based on the classical theory of heterogeneous nucleation leads to fair agreement between the theoretical predictions and experimental results. In the presented cases, the microscopic angle was always found to be smaller than the contact angle obtained from macroscopic measurements. Furthermore, molecular Monte Carlo simulations revealed that the concept of the geometrical contact parameter in heterogeneous nucleation calculations can work surprisingly well even for very small clusters.
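In the classical theory referred to above, a planar substrate lowers the homogeneous nucleation barrier by a purely geometric factor of the contact angle, which is why fitting a smaller (microscopic) angle changes the predicted rates so strongly. A minimal sketch of that standard factor; the angle values below are illustrative, not taken from the thesis:

```python
import math

def geometric_factor_planar(contact_angle_deg):
    """Classical geometric reduction factor for heterogeneous nucleation on a
    planar substrate: f = (2 + m)(1 - m)^2 / 4 with m = cos(theta).
    f = 1 (theta = 180 deg) recovers the homogeneous barrier; f -> 0 as the
    surface becomes perfectly wetting (theta -> 0)."""
    m = math.cos(math.radians(contact_angle_deg))
    return (2 + m) * (1 - m) ** 2 / 4

# A smaller microscopic angle implies a much lower barrier than the
# macroscopic angle would suggest:
f_macro = geometric_factor_planar(90.0)   # m = 0  ->  f = 0.5
f_micro = geometric_factor_planar(40.0)   # f_micro < f_macro
```

Since the barrier enters the nucleation rate exponentially, even a modest drop in this factor shifts the predicted onset conditions substantially.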
Abstract:
The aims of the thesis are (1) to present a systematic evaluation of generation and its relevance as a sociological concept, (2) to reflect on how generational consciousness, i.e. generation as an object of collective identification that has social significance, can emerge and take shape, and (3) to analyze empirically the generational experiences and consciousness of one specific generation, namely the Finnish baby boomers (b. 1945-1950). The thesis contributes to the discussion on the social (as distinct from the genealogical) meaning of the concept of generation, launched by Karl Mannheim's classic Das Problem der Generationen (1928), whose central idea is that a certain group of people is bonded together by a shared experience and that this bonding can result in a distinct self-consciousness. The thesis comprises six original articles and an extensive summarizing chapter. In the empirical articles, the baby boomers are studied on the basis of nationally representative survey data (N = 2628) and narrative life-story interviews (N = 38). In the article that discusses the connection between generations and social movements, the analysis is based on a member survey of Attac Finland (N = 1096). Three main themes were clarified in the thesis. (1) In the social sense, the concept of generation is a modern, problematic, and ultimately political concept. It served the interests of the intellectuals who developed it in the early 20th century and provided them, as an alternative to the concept of social class, with a new way of thinking about social change and progress. The concept of generation is always coupled with the concept of Zeitgeist or some other controversial way of defining what is essential, i.e. what creates generations, in a given culture. Thus generation is, as a product of definition and classification struggles, a contested concept.
The concept also clearly carries elitist connotations; the idea of some kind of vanguard (the elite) that represents an entire generation by proclaiming itself its spokesman automatically creates a counterpart, namely the others in the peer group who are thought to be represented (the masses). (2) Generational consciousness cannot emerge as a result of any kind of automatic process or endogenously; it must be made. There has to be somebody who represents the generation in order for that generation to exist in people's minds and as an object of identification; generational experiences and their meanings must be articulated. Hence, social generations are, in a fundamental manner, discursively constructed. The articulations of generational experiences (speeches, writings, manifestos, labels etc.) can be called the discursive dimension of social generations, and through this notion it can be seen how public discourse shapes people's generational consciousness. Another important element in the process is collective memory, as generational consciousness often takes form only retrospectively. (3) Finnish baby boomers are not a united or homogeneous generation but are divided into many smaller sections with specific generational experiences and consciousnesses. The content of the generational consciousness of the baby boomers is heavily politically charged. A salient dividing line inside the age group is formed by individual attitudes towards so-called 1960s radicalism. Identification with the 1960s generation functions today as a positive self-definition of a certain small leftist elite group, and the values and characteristics usually connected with the idea of the 1960s generation do not represent the whole age group. On the contrary, among some members of the baby boom cohort, generational identification is still directed by the experience of how traditional values were disgraced in the 1960s.
As objects of identification, the neutral term baby boomers and the charged 1960s generation are totally different things, and therefore they should not be used as synonyms. Although the significance of the 1960s generation group is often overestimated, they are nevertheless special with respect to generational consciousness because they have presented themselves as the voice of the entire generation. Their generational interpretations have spread through the media with the help of certain iconic images of the generation, to such an extent that 1960s radicalism has become an indirect generational experience for other parts of the baby boom cohort as well.
Abstract:
Recently it has been recognized that evolutionary aspects play a major role in conservation issues of a species. In this thesis I have combined evolutionary research with conservation studies to provide new insight into these fields. The study object of this thesis is the house sparrow, a species with features that make it interesting for this type of study. The house sparrow has been ubiquitous almost all over the world. Even though it is still abundant, several countries have reported major declines, which have taken place in a relatively short time and cover both urban and rural habitats. In Finland this species has declined by more than two thirds in just over two decades. In addition, as the house sparrow lives only in human-inhabited areas, it can also raise public awareness of conservation issues. I used both an extensive museum collection of house sparrows collected in the 1980s from all over Finland and samples collected in 2009 from 12 of the previously sampled localities. I used molecular techniques to study neutral genetic variation within, and genetic differentiation between, the study populations. I then combined this knowledge with data gathered from morphometric measurements. In addition, I analyzed eight heavy metals from the livers of house sparrows that lived in either rural or urban areas in the 1980s and evaluated the role of heavy metal pollution as a possible cause of the declines. Even though the dispersal of house sparrows is limited, I found that, just as the declines started in the 1980s, the house sparrows formed a genetically panmictic population on the scale of the whole of Finland. Compared to Norway, where neutral genetic divergence has been found even across small geographic distances, I concluded that this difference is likely due to contrasting landscapes. In Finland the landscape is rather homogeneous, facilitating the movements of these birds and maintaining gene flow even with low dispersal.
To see whether the declines have had an effect on the neutral genetic variation of the populations, I compared the historical and contemporary genetic data. I showed that even though genetic diversity has not decreased due to the drastic declines, the populations have indeed become more differentiated from each other. This shows that even in a still quite abundant species, declines can have an effect on genetic variation. It is shown that genetic diversity and differentiation may approach their new equilibria at different rates. This emphasizes the importance of studying both of them, and if the latter has increased, it should be taken as a warning sign of a possible loss of genetic diversity in the future. One of the factors suggested to be responsible for the house sparrow declines is heavy metal pollution. When studying the livers of house sparrows from the 1980s, I discovered higher heavy metal concentrations in urban than in rural habitats, but the levels were comparatively low, so heavy metal pollution does not seem to be a direct cause of the declines in Finland. However, heavy metals are known to decrease the amount of insects in urban areas, and thus in cities heavy metals may have an indirect effect on house sparrows. Although neutral genetic variation is an important tool for conservation genetics, it does not tell the whole story. Since neutral genetic variation is not affected by selection, the information it provides can be one-sided. It is possible that, even when neutral genetic differentiation is low, there is substantial variation in additive genetic traits, indicating local adaptation. Therefore I performed a comparison between neutral genetic differentiation and phenotypic differentiation. I discovered that two traits out of seven are likely to be under directional selection, whereas the others could be affected by random genetic drift.
Bergmann's rule may be behind the observed directional selection in wing length and body mass. These results highlight the importance of estimating both neutral and adaptive genetic variation.
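The differentiation comparisons above rest on measures such as Wright's F_ST, which contrasts within-population and total heterozygosity. A minimal single-locus, two-population sketch; the allele frequencies are illustrative, not values from the thesis data:

```python
def fst_two_pops(p1, p2):
    """Wright's F_ST for one biallelic locus in two equal-sized populations:
    F_ST = (H_T - H_S) / H_T, with expected heterozygosity H = 2p(1-p)."""
    # Mean expected heterozygosity within the two populations:
    hs = (2 * p1 * (1 - p1) + 2 * p2 * (1 - p2)) / 2
    # Expected heterozygosity of the pooled (total) population:
    p_bar = (p1 + p2) / 2
    ht = 2 * p_bar * (1 - p_bar)
    return (ht - hs) / ht

# Identical frequencies -> no differentiation; divergent frequencies -> F_ST rises.
f_same = fst_two_pops(0.5, 0.5)      # 0.0
f_diverged = fst_two_pops(0.2, 0.8)  # ~0.36
```

Increased differentiation with unchanged diversity, as reported for the sparrow populations, shows up in this framework as rising F_ST while H_S stays constant.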
Abstract:
We reformulate and extend our recently introduced quantum kinetic theory for interacting fermion and scalar fields. Our formalism is based on the coherent quasiparticle approximation (cQPA) where nonlocal coherence information is encoded in new spectral solutions at off-shell momenta. We derive explicit forms for the cQPA propagators in the homogeneous background and show that the collision integrals involving the new coherence propagators need to be resummed to all orders in gradient expansion. We perform this resummation and derive generalized momentum space Feynman rules including coherent propagators and modified vertex rules for a Yukawa interaction. As a result we are able to set up self-consistent quantum Boltzmann equations for both fermion and scalar fields. We present several examples of diagrammatic calculations and numerical applications including a simple toy model for coherent baryogenesis.
Abstract:
Many Finnish IT companies have gone through numerous organizational changes over the past decades. This book draws attention to how stability may be central to software product development experts and IT workers more generally, who continuously have to cope with such change in their workplaces. It does so by analyzing and theorizing change and stability as intertwined and co-existent, thus throwing light on how it is possible that, for example, even if ‘the walls fall down the blokes just code’ and maintain a sense of stability in their daily work. Rather than reproducing the picture of software product development as exciting cutting edge activities and organizational change as dramatic episodes, the study takes the reader beyond the myths surrounding these phenomena to the mundane practices, routines and organizings in product development during organizational change. An analysis of these ordinary practices offers insights into how software product development experts actively engage in constructing stability during organizational change through a variety of practices, including solidarity, homosociality, close relations to products, instrumental or functional views on products, preoccupations with certain tasks and humble obedience. Consequently, the study shows that it may be more appropriate to talk about varieties of stability, characterized by a multitude of practices of stabilizing rather than states of stagnation. Looking at different practices of stability in depth shows the creation of software as an arena for micro-politics, power relations and increasing pressures for order and formalization. 
The thesis gives particular attention to power relations and processes of positioning following organizational change: how social actors come to understand themselves in the context of ongoing organizational change, how they comply with and/or contest dominant meanings, how they identify and dis-identify with formalization, and how power relations often are reproduced despite dis-identification. Related to processes of positioning, the reader is also given a glimpse into what being at work in a male-dominated and relatively homogeneous work environment looks like. It shows how the strong presence of men or “blokes” of a particular age and education seems to become invisible in workplace talk that appears ‘non-conscious’ of gender.
Abstract:
The aim of this study was to evaluate and test methods that could improve local estimates of a general model fitted to a large area. In the first three studies, the intention was to divide the study area into sub-areas that were as homogeneous as possible according to the residuals of the general model; in the fourth study, the localization was based on the local neighbourhood. According to spatial autocorrelation (SA), points closer together in space are more likely to be similar than those that are farther apart. Local indicators of SA (LISAs) test the similarity of data clusters. A LISA was calculated for every observation in the dataset, and together with the spatial position and the residual of the global model, the data were segmented using two different methods: classification and regression trees (CART) and the multiresolution segmentation algorithm (MS) of the eCognition software. The general model was then re-fitted (localized) to the resulting sub-areas. In kriging, the SA is modelled with a variogram, and the spatial correlation is a function of the distance (and direction) between the observation and the point of calculation. A general trend is corrected with the residual information of the neighbourhood, whose size is controlled by the number of nearest neighbours; nearness is measured as Euclidean distance. With all methods, the root mean square errors (RMSEs) were lower, but with the methods that segmented the study area, the spread of the individual localized RMSEs was wide. Therefore, an element capable of controlling the division or localization should be included in the segmentation-localization process. Kriging, on the other hand, provided stable estimates when the number of neighbours was sufficient (over 30), thus offering the best potential for further studies. CART could even be combined with kriging or with non-parametric methods, such as most similar neighbours (MSN).
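The neighbourhood correction described above can be caricatured without a variogram by replacing the distance-based kriging weights with a plain mean over the k nearest residuals. A simplified Python sketch under that assumption; the function and data names are hypothetical, not from the study:

```python
import math

def knn_residual_correction(x, y, points, residuals, k=5):
    """Correct a global-model prediction at (x, y) with the mean residual of
    its k nearest observations (Euclidean distance). A stand-in for the
    kriging neighbourhood correction: proper kriging would weight the
    residuals by a variogram model instead of averaging them equally."""
    dists = sorted(
        (math.hypot(x - px, y - py), r)
        for (px, py), r in zip(points, residuals)
    )
    nearest = [r for _, r in dists[:k]]
    return sum(nearest) / len(nearest)

# Toy data: the global model under-predicts in the east (positive residuals)
# and over-predicts in the west (negative residuals).
pts = [(0, 0), (1, 0), (9, 0), (10, 0)]
res = [-1.0, -1.0, 2.0, 2.0]
correction_east = knn_residual_correction(9.5, 0, pts, res, k=2)  # -> 2.0
```

The choice of k plays the same stabilizing role as the neighbourhood size in the study: with too few neighbours the correction is noisy, which matches the finding that estimates became stable only above roughly 30 neighbours.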