868 results for University enrollment in Library Science
Abstract:
Early Childhood Education (ECE) has a long history of building foundations for children to achieve their full potential, enabling parents to participate in the economy while children are cared for, addressing poverty and disadvantage, and building individual, community and societal resources. In so doing, ECE has developed a set of cultural practices and ways of knowing that shape the field and the people who work within it. ECE, consequently, is frequently described as unique and special (Moss, 2006; Penn, 2011). This works to define and distinguish the field while, simultaneously, insulating it from other contexts, professions, and ideas. Recognising this dualism illuminates some of the risks and challenges of operating in an insular and isolated fashion. In the 21st century, there are new challenges for children, families and societies to which ECE must respond if it is to continue to be relevant. One major issue is how ECE contributes to transition towards more sustainable ways of living. Addressing this contemporary social problem is one from which Early Childhood teacher education has been largely absent (Davis & Elliott, 2014), despite the well recognised but often ignored role of education in contributing to sustainability. Because of its complexity, sustainability is sometimes referred to as a ‘wicked problem’ (Rittel & Webber, 1973; Australian Public Service Commission, 2007) requiring alternatives to ‘business as usual’ problem solving approaches. In this chapter, we propose that addressing such problems alongside disciplines other than Education enables the Early Childhood profession to have its eyes opened to new ways of thinking about our work, potentially liberating us from the limitations of our “unique” and idiosyncratic professional cultures. 
In our chapter, we focus on understandings of culture and diversity, looking to broaden these by exploring the different ‘cultures’ of the specialist fields of ECE and Design (in this project, we worked with students studying Architecture, Industrial Design, Landscape Architecture and Interior Design). We define culture not as it is typically represented, i.e. in relation to ideas and customs of particular ethnic and language groups, but in relation to the ideas and practices of people working in different disciplines and professions. We assert that different specialisms have their own ‘cultural’ practices. Further, we propose that this kind of theoretical work helps us to reconsider ways in which ECE might be reframed and broadened to meet new challenges such as sustainability and as yet unknown future challenges and possibilities. We explore these matters by turning to preservice Early Childhood teacher education (in Australia) as a context in which traditional views of culture and diversity might be reconstructed. We are looking to push our specialist knowledge boundaries and to extend both preservice teachers and academics beyond their comfort zones by engaging in innovative interdisciplinary learning and teaching. We describe a case study of preservice Early Childhood teachers and designers working in collaborative teams, intersecting with a ‘real-world’ business partner. The joint learning task was the design of an early learning centre based on sustainable design principles and in which early Education for Sustainability (EfS) would be embedded. Data were collected via focus group and individual interviews with students in ECE and Design. Our findings suggest that interdisciplinary teaching and learning holds considerable potential in dismantling taken-for-granted cultural practices, such that professional roles and identities might be reimagined and reconfigured. 
We conclude the chapter with provocations challenging the ways in which culture and diversity in the field of ECE might be reconsidered within teacher education.
Abstract:
Guanylyl cyclase C (GCC) is the receptor for the gastrointestinal hormones guanylin and uroguanylin, in addition to the bacterial heat-stable enterotoxins, which are one of the major causes of watery diarrhea the world over. GCC is expressed in intestinal cells, colorectal tumor tissue and tumors originating from metastasis of the colorectal carcinoma. We have earlier generated a monoclonal antibody to human GCC, GCC:B10, which was useful for the immunohistochemical localization of the receptor in the rat intestine (Nandi A et al., 1997, J Cell Biochem 66:500-511), and mapped its epitope to a 63-amino-acid stretch in the intracellular domain of GCC. In view of the potential that this antibody has for the identification of colorectal tumors, we have characterized the epitope for GCC:B10 in this study. Overlapping peptide synthesis indicated that the epitope was contained in the sequence HIPPENIFPLE. This sequence was unique to GCC, and despite a short stretch of homology with serum amyloid protein and pertussis toxin, no cross-reactivity was detected. The core epitope was delineated using a random hexameric phage display library, and two categories of sequences were identified, containing either a single proline residue or two adjacent ones. No sequence identified by phage display was identical to the epitope present in GCC, indicating that the phage sequences represented mimotopes of the native epitope. Alignment of these sequences with HIPPENIFPLE suggested duplication of the recognition motif, which was confirmed by peptide synthesis. These studies allowed us not only to define the requirements of epitope recognition by the GCC:B10 monoclonal antibody, but also to describe a novel means of epitope recognition involving topological mimicry and probable duplication of the cognate epitope in the native guanylyl cyclase C receptor sequence.
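The overlapping-peptide strategy described above can be sketched in a few lines: tile the antigen sequence with overlapping windows, test each window for antibody binding, and intersect the reactive windows to bound the epitope. The window length, step size and the `binds` predicate below are illustrative assumptions, not the actual experimental parameters of the study, where binding would be an experimental (e.g. ELISA) readout.

```python
# Sketch of epitope localization by overlapping peptide scanning.
# Window length, step and the binding readout are hypothetical choices.

EPITOPE = "HIPPENIFPLE"  # epitope sequence reported for GCC:B10

def overlapping_peptides(seq, length=15, step=5):
    """Tile the sequence with overlapping windows of `length` residues."""
    return [(i, seq[i:i + length]) for i in range(0, len(seq) - length + 1, step)]

def bound_epitope(seq, binds):
    """Bound the epitope by intersecting all antibody-reactive windows."""
    hits = [(i, p) for i, p in overlapping_peptides(seq) if binds(p)]
    if not hits:
        return None
    lo = max(i for i, _ in hits)            # rightmost start of a reactive window
    hi = min(i + len(p) for i, p in hits)   # leftmost end of a reactive window
    return seq[lo:hi]

# Toy antigen with the epitope embedded; "binding" is simulated here.
antigen = "A" * 20 + EPITOPE + "G" * 20
region = bound_epitope(antigen, lambda p: EPITOPE in p)
```

The returned `region` is a window-sized stretch guaranteed to contain the epitope; finer mapping, as in the study's hexameric phage display step, then narrows it further.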
Abstract:
The aim of this dissertation was to explore how different types of prior knowledge influence student achievement and how different assessment methods influence the observed effect of prior knowledge. The project started by creating a model of prior knowledge, which was tested in various science disciplines. Study I explored the contribution of different components of prior knowledge to student achievement in two different mathematics courses. The results showed that the procedural knowledge components which require higher-order cognitive skills predicted the final grades best and were also highly related to previous study success. The same pattern regarding the influence of prior knowledge was also seen in Study III, which was a longitudinal study of the accumulation of prior knowledge in the context of pharmacy. The study analysed how prior knowledge from previous courses was related to student achievement in the target course. The results implied that students who possessed higher-level prior knowledge, that is, procedural knowledge, from previous courses also obtained higher grades in the more advanced target course. Study IV explored the impact of different types of prior knowledge on students’ likelihood of dropping out of the course, on the pace of completing it and on the final grade. The study was conducted in the context of chemistry. The results again revealed that students who performed well in the procedural prior-knowledge tasks were also likely to complete the course within the scheduled time and get higher final grades. On the other hand, students whose performance was weak in the procedural prior-knowledge tasks were more likely to drop out or take a longer time to complete the course. Study II explored the issue of prior knowledge from another perspective, aiming to analyse the interrelations between academic self-beliefs, prior knowledge and student achievement in the context of mathematics. 
The results revealed that prior knowledge was more predictive of student achievement than were the other variables included in the study. Self-beliefs were also strongly related to student achievement, but the predictive power of prior knowledge outweighed the influence of self-beliefs when they were included in the same model. There was also a strong correlation between academic self-beliefs and prior-knowledge performance. The results of all four studies were consistent with each other, indicating that the model of prior knowledge may be used as a tool for prior-knowledge assessment. It is useful to make a distinction between different types of prior knowledge in assessment, since the type of prior knowledge students possess appears to make a difference. The results implied that there is indeed variation between students’ prior knowledge and academic self-beliefs which influences student achievement. This should be taken into account in instruction.
Abstract:
This proceedings contains abstracts of 108 papers focusing on the different Tospovirus diseases of various crops and their thysanopteran vectors. The genetics of these pests and pathogens, the different methods used in their control and their geographical distribution are also highlighted.
Abstract:
Most connective tissue research in meat science has concerned the properties of intramuscular connective tissue (IMCT) in connection with the eating quality of meat. From the chemical and physical properties of meat, researchers have concluded that meat from animals younger than physiological maturity is the most tender. In pork and poultry, a different challenge has emerged: the structure of cooked meat has weakened. In extreme cases, raw porcine M. semimembranosus (SM), and in most turkeys M. pectoralis superficialis (PS), can be peeled off in strips along the perimysium which surrounds the muscle fibre bundles (destructured meat), and when cooked, the slices disintegrate. Raw chicken meat is generally very soft and when cooked, it can even be mushy. The overall aim of this thesis was to study the thermal properties of IMCT in porcine SM in order to see if these properties were associated with destructured meat in pork, and to characterise IMCT in poultry PS. First, a 'baseline' study to characterise the thermal stability of IMCT in light coloured (SM and M. longissimus dorsi in pigs and PS in poultry) and dark coloured (M. infraspinatus in pigs and a combination of M. quadriceps femoris and M. iliotibialis lateralis in poultry) muscles was necessary. Thereafter, it was investigated whether the properties of muscle fibres differed in destructured and normal porcine muscles. Both collagen content and solubility were higher in dark coloured than in light coloured muscles in pork and poultry. Collagen solubility was especially high in chicken muscles, approximately 30%, compared with porcine and turkey muscles. However, collagen content and solubility were similar in destructured and normal porcine SM muscles. Thermal shrinkage of IMCT occurred at approximately 65 °C in pork and poultry. It occurred at a lower temperature in light coloured muscles than in dark coloured muscles, although the difference was not always significant. 
The onset and peak temperatures of thermal shrinkage of IMCT were lower in destructured than in normal SM muscles, when the IMCT from the SM muscles exhibiting the ten lowest and ten highest ultimate pH values was investigated (onset: 59.4 °C vs. 60.7 °C, peak: 64.9 °C vs. 65.7 °C). As the destructured meat was paler than normal meat, the PSE (pale, soft, exudative) phenomenon could not be ruled out. The muscle fibre cross sectional area (CSA), the number of capillaries per muscle fibre CSA and per fibre, and sarcomere length were similar in destructured and normal SM muscles. Drip loss was clearly higher in destructured than in normal SM muscles. In conclusion, collagen content and solubility and thermal shrinkage temperature vary between porcine and poultry muscles. No single feature of the IMCT could be directly associated with weakening of the meat structure. Poultry breast meat is very homogeneous within the species.
Abstract:
PROFESSION, PERSON AND WORLDVIEW AT A TURNING POINT: A Study of University Libraries and Library Staff in the Information Age, 1970–2005. The incongruity between commonly held ideas of libraries and librarians and the changes that have occurred in libraries since 2000 provided the impulse for this work. The objective is to find out whether the changes of the last few decades have penetrated to a deeper level, that is, whether they have caused changes in the values and world views of library staff and management. The study focuses on Finnish university libraries and the people who work in them. The theoretical framework is provided by the concepts of world view (values, the concept of time, man and self, the experience of the supernatural and the holy, community and leadership). The viewpoint, framework and methods of the study, applying the world view framework, place it in the area of Comparative Religion. The time frame is the information age, which has deeply affected Finnish society and scholarly communication from 1970 to 2005. The source material of the study comprises 30 life stories; somewhat more than half of the stories come from the University of Helsinki, and the rest from the other eight universities. Written sources include library journals, planning documents and historical accounts of libraries. The experiences and research diaries of the researcher are also used as source material. The world view questions are discussed on different levels: 1) recognition of the differences and similarities in the values of the library sphere and the university sphere, 2) examination of the world view elements, community and leadership based on the life stories, and 3) the three phases of the effects of information technology on the university libraries and those who work in them. 
In comparing the values of the library sphere and the university sphere, the appreciation of creative work and culture as well as the founding principles of science and research are jointly held values. The main difference between the values in the university and library spheres concerns competition and service. Competition is part of the university as an institution of research work. The core value of the library sphere is service, which creates the essential ethos of library work. The ethical principles of the library sphere also include the values of democracy and equality as well as the value of intellectual freedom. There is also a difference between an essential value in the university sphere, the value of autonomy and academic freedom on the one hand, and the global value of the library sphere - organizing operations in a practical and efficient way on the other hand. Implementing this value can also create tension between the research community and the library. Based on the life stories, similarities can be found in the values of the library staff members. The value of service seems to be of primary importance for all who are committed to library work and who find it interesting and rewarding. The service role of the library staff can be extended from information services provider to include the roles of teacher, listener and even therapist, all needed in a competitive research community. The values of democracy and equality also emerge fairly strongly. The information age development has progressed in three phases in the libraries from the 1960s onward. In the third phase beginning in the mid 1990s, the increased usage of electronic resources has set fundamental changes in motion. The changes have affected basic values and the concept of time as well as the hierarchies and valuations within the library community. In addition to and as a replacement for the library possessing a local identity and operational model, a networked, global library is emerging. 
The changes have brought tension both to the library communities and to the relationship between the university community and the library. Future orientation can be said to be the key concept for change; it affects where the ideals and models for operations are taken from. Future orientation manifests itself as changes in metaphors, changes in the model of a good librarian and as communal valuations. Tension between the libraries and research communities can arise if the research community pictures the library primarily as a traditional library building with a local identity, whereas the 21st century library staff and directors are affected by future orientation and membership in a networked library sphere, working proactively to develop their libraries.
Abstract:
Background: The number of available structures of large multi-protein assemblies is quite small. Such structures provide phenomenal insights into the organization, mechanism of formation and functional properties of the assembly. Hence, detailed analysis of such structures is highly rewarding. However, the common problem in such analyses is the low resolution of these structures. In recent times, a number of attempts that combine low resolution cryo-EM data with higher resolution structures determined using X-ray analysis or NMR, or generated using comparative modeling, have been reported. Even in such attempts, the best result one arrives at is a very coarse idea of the assembly structure in terms of a trace of the C alpha atoms, which are modeled with modest accuracy. Methodology/Principal Findings: In this paper, we first present an objective approach to identify potentially solvent exposed and buried residues solely from the positions of C alpha atoms and the amino acid sequence, using residue-type-dependent thresholds for accessible surface areas of C alpha. We extend the method further to recognize potential protein-protein interface residues. Conclusion/Significance: Our approach to identify buried and exposed residues solely from the positions of C alpha atoms resulted in an accuracy of 84%, sensitivity of 83-89% and specificity of 67-94%, while recognition of interfacial residues corresponded to an accuracy of 94%, sensitivity of 70-96% and specificity of 58-94%. Interestingly, detailed analysis of cases of mismatch between recognition of interface residues from C alpha positions and all-atom models suggested that recognition of interfacial residues using C alpha atoms only corresponds better with the intuitive notion of what an interfacial residue is. 
Our method should be useful in the objective analysis of structures of protein assemblies when only C alpha positions are available, as, for example, in cases of integration of cryo-EM data with high resolution structures of the components of the assembly.
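As a rough illustration of how burial can be judged from C alpha positions alone, the sketch below classifies residues by local C alpha packing density with residue-type-dependent thresholds. Note the paper's actual method thresholds C alpha accessible surface areas; the neighbour-count proxy, the cutoff distance and the threshold values here are all invented for illustration.

```python
import numpy as np

# Illustrative sketch only: the paper thresholds the accessible surface area
# of each C-alpha with residue-type-dependent cutoffs; here a crude
# neighbour-count proxy for burial is used, with invented threshold values.

BURIAL_THRESHOLD = {"GLY": 14, "ALA": 15, "TRP": 18}  # hypothetical, per residue type

def classify_residues(ca_coords, res_names, cutoff=10.0, default_thr=16):
    """Label each residue 'buried' or 'exposed' from C-alpha packing density."""
    coords = np.asarray(ca_coords, dtype=float)
    # Pairwise C-alpha distances and neighbour counts within the cutoff
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    n_neighbours = (d < cutoff).sum(axis=1) - 1  # exclude self
    return ["buried" if n >= BURIAL_THRESHOLD.get(name, default_thr) else "exposed"
            for name, n in zip(res_names, n_neighbours)]

# A tightly packed stretch plus one residue far from the rest
coords = [[0.5 * i, 0.0, 0.0] for i in range(20)] + [[100.0, 0.0, 0.0]]
labels = classify_residues(coords, ["ALA"] * 21)
```

Densely packed positions are labelled buried and the isolated one exposed; the same thresholding idea extends to interface recognition by counting neighbours across a chain boundary.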
Abstract:
Activation of inflammatory immune responses during granuloma formation by the host upon infection with mycobacteria is one of the crucial steps that is often associated with tissue remodeling and breakdown of the extracellular matrix. In these complex processes, cyclooxygenase-2 (COX-2) plays a major role in chronic inflammation, and matrix metalloproteinase-9 (MMP-9) figures significantly in tissue remodeling. In this study, we investigated the molecular mechanisms by which Phosphatidyl-myo-inositol dimannosides (PIM2), an integral component of the mycobacterial envelope, trigger COX-2 and MMP-9 expression in macrophages. PIM2 triggers the activation of Phosphoinositide-3 Kinase (PI3K) and Notch1 signaling, leading to COX-2 and MMP-9 expression in a Toll-like receptor 2 (TLR2)-MyD88 dependent manner. Data from Notch1 signaling perturbations demonstrate the involvement of cross-talk with members of the PI3K and Mitogen-activated protein kinase pathways. Enforced expression of the cleaved Notch1 in macrophages induces the expression of COX-2 and MMP-9. PIM2 triggered significant p65 nuclear factor-kappa B (NF-kappa B) nuclear translocation that was dependent on activation of PI3K or Notch1 signaling. Furthermore, COX-2 and MMP-9 expression requires Notch1 mediated recruitment of Suppressor of Hairless (CSL) and NF-kappa B to the respective promoters. Inhibition of PIM2-induced COX-2 resulted in a marked reduction in MMP-9 expression, clearly implicating COX-2 dependent signaling events in driving MMP-9 expression. Taken together, these data implicate PI3K and Notch1 signaling as obligatory early proximal signaling events during PIM2-induced COX-2 and MMP-9 expression in macrophages.
Abstract:
Regular electrical activation waves in cardiac tissue lead to the rhythmic contraction and expansion of the heart that ensures blood supply to the whole body. Irregularities in the propagation of these activation waves can result in cardiac arrhythmias, like ventricular tachycardia (VT) and ventricular fibrillation (VF), which are major causes of death in the industrialised world. Indeed, there is growing consensus that spiral or scroll waves of electrical activation in cardiac tissue are associated with VT, whereas, when these waves break to yield spiral- or scroll-wave turbulence, VT develops into life-threatening VF: in the absence of medical intervention, this makes the heart incapable of pumping blood and a patient dies roughly two-and-a-half minutes after the initiation of VF. Thus studies of spiral- and scroll-wave dynamics in cardiac tissue pose important challenges for in vivo and in vitro experimental studies and for in silico numerical studies of mathematical models for cardiac tissue. A major goal here is to develop low-amplitude defibrillation schemes for the elimination of VT and VF, especially in the presence of inhomogeneities that occur commonly in cardiac tissue. We present a detailed and systematic study of spiral- and scroll-wave turbulence and spatiotemporal chaos in four mathematical models for cardiac tissue, namely, the Panfilov, Luo-Rudy phase 1 (LR1) and reduced Priebe-Beuckelmann (RPB) models, and the model of ten Tusscher, Noble, Noble, and Panfilov (TNNP). In particular, we use extensive numerical simulations to elucidate the interaction of spiral and scroll waves in these models with conduction and ionic inhomogeneities; we also examine the suppression of spiral- and scroll-wave turbulence by low-amplitude control pulses. Our central qualitative result is that, in all these models, the dynamics of such spiral waves depends very sensitively on such inhomogeneities. 
We also study two types of control schemes that have been suggested for the control of spiral turbulence, via low-amplitude current pulses, in such mathematical models for cardiac tissue; our investigations here are designed to examine the efficacy of such control schemes in the presence of inhomogeneities. We find that a local pulsing scheme does not suppress spiral turbulence in the presence of inhomogeneities, but a scheme that uses control pulses on a spatially extended mesh is more successful in eliminating spiral turbulence. We discuss the theoretical and experimental implications of our study that have a direct bearing on defibrillation, i.e., the control of life-threatening cardiac arrhythmias such as ventricular fibrillation.
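The kind of simulation described above can be illustrated with a generic two-variable excitable medium. The sketch below uses a Barkley-type model as a stand-in rather than the Panfilov, LR1, RPB or TNNP ionic models of the study, and the parameter values, grid size and time step are all illustrative choices.

```python
import numpy as np

# Generic excitable-medium sketch (Barkley-type two-variable model), standing
# in for the ionic cardiac-tissue models studied in the thesis.  Parameters
# a, b, eps and the discretisation are illustrative, not from the study.

def step(u, v, a=0.75, b=0.06, eps=0.02, D=1.0, dt=0.01, dx=0.5):
    """One explicit-Euler step of u_t = f(u, v)/eps + D*Lap(u), v_t = u - v,
    with periodic boundaries via np.roll."""
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u) / dx**2
    u_new = u + dt * (u * (1.0 - u) * (u - (v + b) / a) / eps + D * lap)
    v_new = v + dt * (u - v)
    return u_new, v_new

# Cross-field initial condition, a standard way to launch a spiral wave
n = 64
u = np.zeros((n, n)); u[:, : n // 2] = 1.0     # excited half-plane
v = np.zeros((n, n)); v[: n // 2, :] = 0.4     # refractory half-plane

for _ in range(500):
    u, v = step(u, v)
```

A conduction inhomogeneity can be modelled by making `D` a spatial array, and a low-amplitude control scheme by adding a pulsed current term on a mesh of grid points, which is the spirit of the schemes compared in the thesis.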
Abstract:
Background: Cancer stem cells exhibit close resemblance to normal stem cells in phenotype as well as function. Hence, studying normal stem cell behavior is important in understanding cancer pathogenesis. It has recently been shown that human breast stem cells can be enriched in suspension cultures as mammospheres. However, little is known about the behavior of these cells in long-term cultures. Since extensive self-renewal potential is the hallmark of stem cells, we undertook a detailed phenotypic and functional characterization of human mammospheres over long-term passages. Methodology: Single cell suspensions derived from human breast `organoids' were seeded in ultra low attachment plates in serum free media. The primary mammospheres that formed after a week (termed T1 mammospheres) were passaged every 7th day, generating T2, T3, and T4 mammospheres. Principal Findings: We show that primary mammospheres contain a distinct side-population (SP) that displays a CD24(low)/CD44(low) phenotype, but fails to generate mammospheres. Instead, the mammosphere-initiating potential rests within the CD44(high)/CD24(low) cells, in keeping with the phenotype of breast cancer-initiating cells. In serial sphere formation assays we find that even though primary (T1) mammospheres show telomerase activity and fourth passage T4 spheres contain label-retaining cells, they fail to initiate new mammospheres beyond T5. With increasing passage number, mammospheres showed a growing proportion of smaller spheres, reduced proliferation potential and sphere-forming efficiency, and increased differentiation towards the myoepithelial lineage. Significantly, staining for senescence-associated beta-galactosidase activity revealed a dramatic increase in the number of senescent cells with passage, which might in part explain the inability to continuously generate mammospheres in culture. 
Conclusions: Thus, the self-renewal potential of human breast stem cells is exhausted within five in vitro passages of mammospheres, suggesting the need for further improvement of culture conditions for their long-term maintenance.
Abstract:
The genus Salmonella includes many pathogens of great medical and veterinary importance. Bacteria belonging to this genus are very closely related to those belonging to the genus Escherichia. The lacZYA operon and the lacI gene are present in Escherichia coli, but not in Salmonella enterica. It has been proposed that Salmonella lost the lacZYA operon and lacI during evolution. In this study, we have investigated the physiological and evolutionary significance of the absence of lacI in Salmonella enterica. Using a murine model of typhoid fever, we show that the expression of LacI causes a remarkable reduction in the virulence of Salmonella enterica. LacI also suppresses the ability of Salmonella enterica to proliferate inside murine macrophages. Microarray analysis revealed that LacI interferes with the expression of virulence genes of Salmonella pathogenicity island 2. This effect was confirmed by RT-PCR and Western blot analysis. Interestingly, we found that SBG0326 of Salmonella bongori is homologous to lacI of Escherichia coli. Salmonella bongori is the only other species of the genus Salmonella, and it lacks the virulence genes of Salmonella pathogenicity island 2. Overall, our results demonstrate that LacI is an antivirulence factor of Salmonella enterica and suggest that the absence of lacI has facilitated the acquisition of the virulence genes of Salmonella pathogenicity island 2 in Salmonella enterica, making it a successful systemic pathogen.
Abstract:
There is a growing need to understand the exchange processes of momentum, heat and mass between an urban surface and the atmosphere, as they affect our quality of life. Understanding the source/sink strengths as well as the mixing mechanisms of air pollutants is particularly important due to their effects on human health and climate. This work aims to improve our understanding of these surface-atmosphere interactions based on the analysis of measurements carried out in Helsinki, Finland. The vertical exchange of momentum, heat, carbon dioxide (CO2) and aerosol particle number was measured with the eddy covariance technique at the urban measurement station SMEAR III, where the concentrations of ultrafine, accumulation mode and coarse particle numbers, nitrogen oxides (NOx), carbon monoxide (CO), ozone (O3) and sulphur dioxide (SO2) were also measured. These measurements were carried out over varying measurement periods between 2004 and 2008. In addition, black carbon mass concentration was measured at the Helsinki Metropolitan Area Council site during three campaigns in 1996-2005. Thus, the analyzed dataset covered by far the most comprehensive long-term measurements of turbulent fluxes reported in the literature from urban areas. Moreover, simultaneously measured urban air pollution concentrations and turbulent fluxes were examined for the first time. The complex measurement surroundings enabled us to study the effect of different urban covers on the exchange processes from a single measurement point. The sensible and latent heat fluxes closely followed the intensity of solar radiation, and the sensible heat flux always exceeded the latent heat flux due to anthropogenic heat emissions and the conversion of solar radiation to direct heat in urban structures. This urban heat island effect was most evident during winter nights. The effect of land use cover was seen as increased sensible heat fluxes in more built-up areas compared with areas with high vegetation cover. 
Both aerosol particle and CO2 exchanges were largely affected by road traffic, and the highest diurnal fluxes reached 10⁹ m⁻² s⁻¹ and 20 µmol m⁻² s⁻¹, respectively, in the direction of the road. Local road traffic had the greatest effect on ultrafine particle concentrations, whereas meteorological variables were more important for accumulation mode and coarse particle concentrations. The measurement surroundings of the SMEAR III station served as a source for both particles and CO2, except in summer, when the vegetation uptake of CO2 exceeded the anthropogenic sources in the vegetation sector in daytime, and we observed a downward median flux of 8 µmol m⁻² s⁻¹. This work improved our understanding of the interactions between an urban surface and the atmosphere in a city located at high latitudes in a semi-continental climate. The results can be utilised in urban planning, as the fraction of vegetation cover and vehicular activity were found to be the major environmental drivers affecting most of the exchange processes. However, in order to understand these exchange and mixing processes on a city scale, more measurements above various urban surfaces accompanied by numerical modelling are required.
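The eddy covariance technique mentioned above computes a turbulent flux as the covariance of vertical-wind and scalar fluctuations over an averaging period, F = mean(w'c'). The sketch below applies this to synthetic data; the sampling rate, averaging period and noise model are illustrative, and real flux processing adds steps such as coordinate rotation, despiking and spectral corrections.

```python
import numpy as np

# Minimal eddy-covariance sketch: the turbulent flux of a scalar c is the
# covariance of vertical-wind and scalar fluctuations, F = mean(w' c').
# Synthetic 30-min series at 10 Hz; all numbers are illustrative.

def eddy_flux(w, c):
    wp = w - w.mean()          # fluctuations about the averaging-period mean
    cp = c - c.mean()
    return (wp * cp).mean()

rng = np.random.default_rng(0)
n = 30 * 60 * 10                            # 30 minutes at 10 Hz
w = rng.normal(0.0, 0.3, n)                 # vertical wind (m/s)
c = 400.0 + 5.0 * w + rng.normal(0.0, 1.0, n)  # scalar enhanced in updrafts

F = eddy_flux(w, c)   # ≈ 5 * Var(w) ≈ 0.45, i.e. an upward (positive) flux
```

A positive F means updrafts carry, on average, higher scalar concentrations than downdrafts, i.e. the surface is a net source, which is how the traffic-dominated particle and CO2 fluxes above are diagnosed.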
Abstract:
A better understanding of the rate-limiting step in a first order phase transition, the nucleation process, is of major importance to a variety of scientific fields ranging from atmospheric sciences to nanotechnology and even to cosmology. This is due to the fact that in most phase transitions the new phase is separated from the mother phase by a free energy barrier. This barrier is crossed in a process called nucleation. Nowadays it is considered that a significant fraction of all atmospheric particles is produced by vapor-to-liquid nucleation. In atmospheric sciences, as well as in other scientific fields, the theoretical treatment of nucleation is mostly based on a theory known as the Classical Nucleation Theory. However, the Classical Nucleation Theory is known to have only limited success in predicting the rate at which vapor-to-liquid nucleation takes place at given conditions. This thesis studies unary homogeneous vapor-to-liquid nucleation from a statistical mechanics viewpoint. We apply Monte Carlo simulations of molecular clusters to calculate the free energy barrier separating the vapor and liquid phases and compare our results against laboratory measurements and Classical Nucleation Theory predictions. According to our results, the work of adding a monomer to a cluster in equilibrium vapour is accurately described by the liquid drop model applied by the Classical Nucleation Theory, once the clusters are larger than some threshold size. The threshold cluster sizes contain only a few or some tens of molecules, depending on the interaction potential and temperature. However, the error made in modeling the smallest clusters as liquid drops results in an erroneous absolute value for the cluster work of formation throughout the size range, as predicted by the McGraw-Laaksonen scaling law. 
By calculating correction factors to Classical Nucleation Theory predictions for the nucleation barriers of argon and water, we show that the corrected predictions produce nucleation rates that are in good agreement with experiments. For the smallest clusters, the deviation between the simulation results and the liquid drop values is accurately modelled by the low-order virial coefficients at modest temperatures and vapour densities, or in other words, in the validity range of the non-interacting cluster theory of Frenkel, Band and Bijl. Our results do not indicate a need for a size dependent replacement free energy correction. The results also indicate that Classical Nucleation Theory predicts the size of the critical cluster correctly. We also present a new method for the calculation of the equilibrium vapour density, the size dependence of the surface tension and the planar surface tension directly from cluster simulations. We also show how the size dependence of the cluster surface tension at the equimolar surface is a function of virial coefficients, a result confirmed by our cluster simulations.
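For reference, the Classical Nucleation Theory liquid-drop expressions discussed above can be evaluated in a few lines: the barrier is ΔG* = 16πσ³v²/(3(kT ln S)²) and the critical radius r* = 2σv/(kT ln S). The water-like parameter values below are rounded, illustrative numbers, not those used in the thesis.

```python
import math

# Classical Nucleation Theory sketch: liquid-drop free-energy barrier and
# critical radius for homogeneous vapour-to-liquid nucleation.
# The water-like parameter values are rounded, illustrative numbers.

k_B = 1.380649e-23        # Boltzmann constant, J/K

def cnt_barrier(sigma, v_mol, T, S):
    """Return (Delta G* in J, critical radius in m) for supersaturation S > 1.

    sigma: surface tension (N/m); v_mol: molecular volume in liquid (m^3).
    """
    if S <= 1.0:
        raise ValueError("no finite barrier for S <= 1")
    kT_lnS = k_B * T * math.log(S)
    r_star = 2.0 * sigma * v_mol / kT_lnS
    dG_star = 16.0 * math.pi * sigma**3 * v_mol**2 / (3.0 * kT_lnS**2)
    return dG_star, r_star

# Water-like values at ~260 K: sigma ~ 0.077 N/m, v_mol ~ 3.0e-29 m^3, S = 8
dG, r = cnt_barrier(0.077, 3.0e-29, 260.0, 8.0)
n_star = (4.0 / 3.0) * math.pi * r**3 / 3.0e-29   # molecules in the critical cluster
```

With these inputs the barrier comes out at a few tens of kT and the critical cluster holds a few tens of molecules, which is exactly the size regime where, as the thesis argues, the liquid-drop approximation becomes questionable.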
Abstract:
Nucleation is the first step in the formation of a new phase inside a mother phase. Two main forms of nucleation can be distinguished. In homogeneous nucleation, the new phase forms in a uniform substance. In heterogeneous nucleation, on the other hand, the new phase emerges on a pre-existing surface (a nucleation site). Nucleation is the source of about 30% of all atmospheric aerosol, which in turn has noticeable health effects and a significant impact on climate. Nucleation can be observed in the atmosphere, studied experimentally in the laboratory, and is the subject of ongoing theoretical research. This thesis attempts to be a link between experiment and theory. By comparing simulation results to experimental data, the aims are to (i) better understand the experiments and (ii) determine where the theory needs improvement. Computational fluid dynamics (CFD) tools were used to simulate homogeneous one-component nucleation of n-alcohols in argon and helium as carrier gases, homogeneous nucleation in the water-sulfuric acid system, and heterogeneous nucleation of water vapor on silver particles. In the nucleation of n-alcohols, vapor depletion, the carrier gas effect and the carrier gas pressure effect were evaluated, with a special focus on the pressure effect, whose dependence on vapor and carrier gas properties could be specified. The investigation of nucleation in the water-sulfuric acid system included a thorough analysis of the experimental setup, determining flow conditions, vapor losses, and the nucleation zone. Experimental nucleation rates were compared to various theoretical approaches. We found that none of the considered theoretical descriptions of nucleation captured the role of water in the process at all relative humidities. Heterogeneous nucleation was studied in the activation of silver particles in a TSI 3785 particle counter, which uses water as its working fluid.
The role of the contact angle was investigated, and the influence of incoming particle concentration and homogeneous nucleation on the counting efficiency was determined.
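The way the contact angle $\theta$ enters the classical description of heterogeneous nucleation can be summarized by the standard geometric factor for a cap-shaped embryo on a planar substrate (textbook background, not a result of the thesis):

```latex
\Delta G^{*}_{\mathrm{het}} = f(m)\,\Delta G^{*}_{\mathrm{hom}},
\qquad
f(m) = \frac{(2+m)(1-m)^{2}}{4},
\qquad
m = \cos\theta ,
```

so that $f$ ranges from 0 for complete wetting ($\theta = 0$) to 1 for $\theta = 180^{\circ}$, where the surface gives no assistance at all. This is why the contact angle is a key parameter controlling how easily the seed particles are activated in the counter.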
Abstract:
The planet Mars is the Earth's neighbour in the Solar System. Planetary research stems from humankind's fundamental need to explore its surroundings. Manned missions to Mars are already being planned, and understanding the environment to which the astronauts would be exposed is of utmost importance for a successful mission. Information about the Martian environment provided by models is already used in designing the landers and orbiters sent to the red planet. In particular, studies of the Martian atmosphere are crucial for instrument design, entry, descent and landing system design, landing site selection, and aerobraking calculations. Research on planetary atmospheres can also contribute to atmospheric studies of the Earth via model testing and the development of parameterizations: even after decades of modelling the Earth's atmosphere, we are still far from perfect weather predictions. On a global level, Mars has also been experiencing climate change. The aerosol effect is one of the largest unknowns in present terrestrial climate change studies, and the role of aerosol particles in any climate is fundamental: studies of climate variations on another planet can help us better understand our own global change. In this thesis I have used an atmospheric column model for Mars to study the behaviour of the lowest layer of the atmosphere, the planetary boundary layer (PBL), and I have developed nucleation (particle formation) models for Martian conditions. The models were also coupled to study, for example, fog formation in the PBL. The PBL is perhaps the most significant part of the atmosphere for landers and humans, since we live in it and experience its state, for example, as gusty winds, night frost, and fogs. However, PBL modelling in weather prediction models remains a difficult task. Mars hosts a variety of cloud types, mainly composed of water ice particles, but CO2 ice clouds also form in the very cold polar night and at high altitudes elsewhere.
Nucleation is the first step in particle formation, and it always involves a phase transition. Cloud crystals on Mars form from vapour to ice on ubiquitous, suspended dust particles. Clouds on Mars have only a small radiative effect in the present climate, but this effect may have been more important in the past. This thesis represents an attempt to model the Martian atmosphere at the smallest scales with high resolution. The models used and developed during the course of the research are useful tools for developing and testing parameterizations for larger-scale models all the way up to global climate models, since the small-scale models can describe processes that in the large-scale models are reduced to subgrid (not explicitly resolved) scale.