17 results for University enrollment in Library Science
in Helda - Digital Repository of University of Helsinki
Abstract:
Introduction This case study is based on the experiences of the Electronic Journal of Information Technology in Construction (ITcon), founded in 1995. Development This journal is an example of a particular category of open access journals, which use neither author charges nor subscriptions to finance their operations, but rely largely on unpaid voluntary work in the spirit of the open source movement. The journal has, after some initial struggle, survived its first decade and is now established as one of the half-a-dozen peer-reviewed journals in its field. Operations The journal publishes articles as they become ready, but creates virtual issues through alerting messages to “subscribers”. It has also started to publish special issues, since this helps in attracting submissions and in sharing the workload of review management. From the start the journal adopted a rather traditional article layout. After the first few years the HTML version was dropped, and papers are now published only in PDF format. Performance The journal has recently been benchmarked against the competing journals in its field. Its acceptance rate of 53% is slightly higher than, and its average turnaround time of seven months almost a year faster than, those of the journals in the sample for which data could be obtained. The server log files for the past three years have also been studied. Conclusions Our overall experience demonstrates that it is possible to publish this type of OA journal, with a yearly publishing volume equal to that of a quarterly journal and involving the processing of some fifty submissions a year, using a networked volunteer-based organization.
Abstract:
The aim of this dissertation was to explore how different types of prior knowledge influence student achievement and how different assessment methods influence the observed effect of prior knowledge. The project started by creating a model of prior knowledge which was tested in various science disciplines. Study I explored the contribution of different components of prior knowledge on student achievement in two different mathematics courses. The results showed that the procedural knowledge components which require higher-order cognitive skills predicted the final grades best and were also highly related to previous study success. The same pattern regarding the influence of prior knowledge was also seen in Study III which was a longitudinal study of the accumulation of prior knowledge in the context of pharmacy. The study analysed how prior knowledge from previous courses was related to student achievement in the target course. The results implied that students who possessed higher-level prior knowledge, that is, procedural knowledge, from previous courses also obtained higher grades in the more advanced target course. Study IV explored the impact of different types of prior knowledge on students’ readiness to drop out from the course, on the pace of completing the course and on the final grade. The study was conducted in the context of chemistry. The results revealed again that students who performed well in the procedural prior-knowledge tasks were also likely to complete the course in pre-scheduled time and get higher final grades. On the other hand, students whose performance was weak in the procedural prior-knowledge tasks were more likely to drop out or take a longer time to complete the course. Study II explored the issue of prior knowledge from another perspective. Study II aimed to analyse the interrelations between academic self-beliefs, prior knowledge and student achievement in the context of mathematics. 
The results revealed that prior knowledge was more predictive of student achievement than were the other variables included in the study. Self-beliefs were also strongly related to student achievement, but the predictive power of prior knowledge overruled the influence of self-beliefs when they were included in the same model. There was also a strong correlation between academic self-beliefs and prior-knowledge performance. The results of all four studies were consistent with each other, indicating that the model of prior knowledge may be used as a potential tool for prior knowledge assessment. It is useful to make a distinction between different types of prior knowledge in assessment, since the type of prior knowledge students possess appears to make a difference. The results implied that there is indeed variation between students’ prior knowledge and academic self-beliefs which influences student achievement. This should be taken into account in instruction.
Abstract:
The most common connective tissue research in meat science has been conducted on the properties of intramuscular connective tissue (IMCT) in connection with the eating quality of meat. From the chemical and physical properties of meat, researchers have concluded that meat from animals younger than physiological maturity is the most tender. In pork and poultry, different challenges have arisen: the structure of cooked meat has weakened. In extreme cases raw porcine M. semimembranosus (SM) and in most turkey M. pectoralis superficialis (PS) can be peeled off in strips along the perimysium which surrounds the muscle fibre bundles (destructured meat), and when cooked, the slices disintegrate. Raw chicken meat is generally very soft and when cooked, it can even be mushy. The overall aim of this thesis was to study the thermal properties of IMCT in porcine SM in order to see whether these properties were associated with destructured meat in pork, and to characterise the IMCT in poultry PS. First, a 'baseline' study to characterise the thermal stability of IMCT in light coloured (SM and M. longissimus dorsi in pigs and PS in poultry) and dark coloured (M. infraspinatus in pigs and a combination of M. quadriceps femoris and M. iliotibialis lateralis in poultry) muscles was necessary. Thereafter, it was investigated whether the properties of muscle fibres differed in destructured and normal porcine muscles. The collagen content and also the solubility of dark coloured muscles were higher than in light coloured muscles in pork and poultry. Collagen solubility was especially high in chicken muscles, approx. 30 %, in comparison to porcine and turkey muscles. However, collagen content and solubility were similar in destructured and normal porcine SM muscles. Thermal shrinkage of IMCT occurred at approximately 65 °C in pork and poultry. It occurred at a lower temperature in light coloured muscles than in dark coloured muscles, although the difference was not always significant.
The onset and peak temperatures of thermal shrinkage of IMCT were lower in destructured than in normal SM muscles when the IMCT from the SM muscles exhibiting the ten lowest and the ten highest ultimate pH values was investigated (onset: 59.4 °C vs. 60.7 °C; peak: 64.9 °C vs. 65.7 °C). As the destructured meat was paler than normal meat, the PSE (pale, soft, exudative) phenomenon could not be ruled out. The muscle fibre cross-sectional area (CSA), the number of capillaries per muscle fibre CSA and per fibre, and sarcomere length were similar in destructured and normal SM muscles. Drip loss was clearly higher in destructured than in normal SM muscles. In conclusion, collagen content and solubility and thermal shrinkage temperature vary between porcine and poultry muscles. No single feature of the IMCT could be directly associated with weakening of the meat structure. Poultry breast meat is very homogeneous within the species.
Abstract:
PROFESSION, PERSON AND WORLDVIEW AT A TURNING POINT A Study of University Libraries and Library Staff in the Information Age 1970-2005 The incongruity between commonly held ideas of libraries and librarians and the changes that have occurred in libraries since 2000 provided the impulse for this work. The object is to find out if the changes of the last few decades have penetrated to a deeper level, that is, if they have caused changes in the values and world views of library staff and management. The study focuses on Finnish university libraries and the people who work in them. The theoretical framework is provided by the concepts of world view (values, the concept of time, man and self, the experience of the supernatural and the holy, community and leadership). The viewpoint, framework and methods of the study place it in the area of Comparative Religion by applying the world view framework. The time frame is the information age, which has deeply affected Finnish society and scholarly communication from 1970 to 2005. The source material of the study comprises 30 life stories; somewhat more than half of the stories come from the University of Helsinki, and the rest from the other eight universities. Written sources include library journals, planning documents and historical accounts of libraries. The experiences and research diaries of the researcher are also used as source material. The world view questions are discussed on different levels: 1) recognition of the differences and similarities in the values of the library sphere and the university sphere, 2) examination of the world view elements, community and leadership based on the life stories, and 3) the three phases of the effects of information technology on the university libraries and those who work in them. 
A comparison of the values of the library sphere and the university sphere shows that the appreciation of creative work and culture, as well as the founding principles of science and research, are jointly held values. The main difference between the values in the university and library spheres concerns competition and service. Competition is part of the university as an institution of research work. The core value of the library sphere is service, which creates the essential ethos of library work. The ethical principles of the library sphere also include the values of democracy and equality as well as the value of intellectual freedom. There is also a difference between an essential value in the university sphere, the value of autonomy and academic freedom on the one hand, and the global value of the library sphere - organizing operations in a practical and efficient way on the other hand. Implementing this value can also create tension between the research community and the library. Based on the life stories, similarities can be found in the values of the library staff members. The value of service seems to be of primary importance for all who are committed to library work and who find it interesting and rewarding. The service role of the library staff can be extended from information services provider to include the roles of teacher, listener and even therapist, all needed in a competitive research community. The values of democracy and equality also emerge fairly strongly. The information age development has progressed in three phases in the libraries from the 1960s onward. In the third phase, beginning in the mid-1990s, the increased usage of electronic resources has set fundamental changes in motion. The changes have affected basic values and the concept of time as well as the hierarchies and valuations within the library community. In addition to and as a replacement for the library possessing a local identity and operational model, a networked, global library is emerging. 
The changes have brought tension both to the library communities and to the relationship between the university community and the library. Future orientation can be said to be the key concept for change; it affects where the ideals and models for operations are taken from. Future orientation manifests itself as changes in metaphors, changes in the model of a good librarian and as communal valuations. Tension between the libraries and research communities can arise if the research community pictures the library primarily as a traditional library building with a local identity, whereas the 21st century library staff and directors are affected by future orientation and membership in a networked library sphere, working proactively to develop their libraries.
Abstract:
There is a growing need to understand the exchange processes of momentum, heat and mass between an urban surface and the atmosphere, as they affect our quality of life. Understanding the source/sink strengths as well as the mixing mechanisms of air pollutants is particularly important due to their effects on human health and climate. This work aims to improve our understanding of these surface-atmosphere interactions based on the analysis of measurements carried out in Helsinki, Finland. The vertical exchange of momentum, heat, carbon dioxide (CO2) and aerosol particle number was measured with the eddy covariance technique at the urban measurement station SMEAR III, where the concentrations of ultrafine, accumulation mode and coarse particle numbers, nitrogen oxides (NOx), carbon monoxide (CO), ozone (O3) and sulphur dioxide (SO2) were also measured. These measurements were carried out over varying measurement periods between 2004 and 2008. In addition, black carbon mass concentration was measured at the Helsinki Metropolitan Area Council site during three campaigns in 1996-2005. Thus, the analyzed dataset comprised by far the most comprehensive long-term measurements of turbulent fluxes reported in the literature from urban areas. Moreover, simultaneously measured urban air pollution concentrations and turbulent fluxes were examined for the first time. The complex measurement surroundings enabled us to study the effect of different urban covers on the exchange processes from a single measurement point. The sensible and latent heat fluxes closely followed the intensity of solar radiation, and the sensible heat flux always exceeded the latent heat flux due to anthropogenic heat emissions and the conversion of solar radiation to direct heat in urban structures. This urban heat island effect was most evident during winter nights. The effect of land use cover was seen as increased sensible heat fluxes in more built-up areas compared with areas with high vegetation cover. 
Both aerosol particle and CO2 exchanges were largely affected by road traffic, and the highest diurnal fluxes reached 10⁹ m⁻² s⁻¹ and 20 µmol m⁻² s⁻¹, respectively, in the direction of the road. Local road traffic had the greatest effect on ultrafine particle concentrations, whereas meteorological variables were more important for accumulation mode and coarse particle concentrations. The measurement surroundings of the SMEAR III station served as a source for both particles and CO2, except in summer, when the daytime vegetation uptake of CO2 exceeded the anthropogenic sources in the vegetation sector and we observed a downward median flux of 8 µmol m⁻² s⁻¹. This work improved our understanding of the interactions between an urban surface and the atmosphere in a city located at high latitudes in a semi-continental climate. The results can be utilised in urban planning, as the fraction of vegetation cover and vehicular activity were found to be the major environmental drivers affecting most of the exchange processes. However, in order to understand these exchange and mixing processes on a city scale, more measurements above various urban surfaces accompanied by numerical modelling are required.
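The eddy covariance technique used at SMEAR III computes a turbulent flux as the covariance of fluctuations in vertical wind speed and in the transported scalar. The following is a minimal sketch of that core calculation only; the station's actual processing chain (coordinate rotation, despiking, density corrections) is not shown, and the synthetic data below are purely illustrative:

```python
import numpy as np

def eddy_covariance_flux(w, c):
    """Turbulent flux as the covariance of vertical wind speed w (m/s) and a
    scalar c (e.g. CO2 concentration), via Reynolds decomposition:
    F = mean(w'c'), primes denoting departures from the period mean."""
    w = np.asarray(w, dtype=float)
    c = np.asarray(c, dtype=float)
    w_prime = w - w.mean()
    c_prime = c - c.mean()
    return (w_prime * c_prime).mean()

# Synthetic 30-minute averaging period sampled at 10 Hz (18000 points).
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.3, 18000)                  # vertical wind fluctuations
c = 400.0 + 0.5 * w + rng.normal(0.0, 1.0, 18000)  # scalar correlated with w
flux = eddy_covariance_flux(w, c)                # positive => upward transport
```

A positive covariance means the surface acts as a source (upward flux), a negative one as a sink, which is how the summertime downward CO2 flux in the vegetation sector is identified.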
Abstract:
A better understanding of the limiting step in a first-order phase transition, the nucleation process, is of major importance to a variety of scientific fields ranging from atmospheric sciences to nanotechnology and even to cosmology. This is due to the fact that in most phase transitions the new phase is separated from the mother phase by a free energy barrier. This barrier is crossed in a process called nucleation. Nowadays it is considered that a significant fraction of all atmospheric particles is produced by vapor-to-liquid nucleation. In atmospheric sciences, as well as in other scientific fields, the theoretical treatment of nucleation is mostly based on a theory known as the Classical Nucleation Theory. However, the Classical Nucleation Theory is known to have only limited success in predicting the rate at which vapor-to-liquid nucleation takes place under given conditions. This thesis studies unary homogeneous vapor-to-liquid nucleation from a statistical mechanics viewpoint. We apply Monte Carlo simulations of molecular clusters to calculate the free energy barrier separating the vapor and liquid phases and compare our results against laboratory measurements and Classical Nucleation Theory predictions. According to our results, the work of adding a monomer to a cluster in equilibrium vapour is accurately described by the liquid drop model applied by the Classical Nucleation Theory, once the clusters are larger than some threshold size. The threshold cluster sizes contain only a few or some tens of molecules, depending on the interaction potential and temperature. However, the error made in modeling the smallest clusters as liquid drops results in an erroneous absolute value for the cluster work of formation throughout the size range, as predicted by the McGraw-Laaksonen scaling law. 
By calculating correction factors to Classical Nucleation Theory predictions for the nucleation barriers of argon and water, we show that the corrected predictions produce nucleation rates that are in good agreement with experiments. For the smallest clusters, the deviation between the simulation results and the liquid drop values is accurately modelled by the low-order virial coefficients at modest temperatures and vapour densities, or in other words, in the validity range of the non-interacting cluster theory by Frenkel, Band and Bijl. Our results do not indicate a need for a size-dependent replacement free energy correction. The results also indicate that Classical Nucleation Theory predicts the size of the critical cluster correctly. We also present a new method for the calculation of the equilibrium vapour density, the size dependence of the surface tension and the planar surface tension directly from cluster simulations. We also show how the size dependence of the cluster surface tension at the equimolar surface is a function of the virial coefficients, a result confirmed by our cluster simulations.
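For reference, the free energy barrier of the Classical Nucleation Theory discussed above has, in its standard textbook liquid-drop form (not a result specific to this thesis), the shape below; here σ is the planar surface tension, n_l the liquid number density, S the vapour saturation ratio, and kT ln S the chemical potential difference driving the transition:

```latex
\Delta G(r) = 4\pi r^{2}\sigma - \tfrac{4}{3}\pi r^{3}\, n_{l}\, kT \ln S,
\qquad
r^{*} = \frac{2\sigma}{n_{l}\, kT \ln S},
\qquad
\Delta G^{*} = \frac{16\pi \sigma^{3}}{3\,(n_{l}\, kT \ln S)^{2}}
```

The critical cluster radius r* maximises ΔG(r), and the barrier height ΔG* is what the correction factors mentioned above adjust before nucleation rates are computed.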
Abstract:
Nucleation is the first step in the formation of a new phase inside a mother phase. Two main forms of nucleation can be distinguished. In homogeneous nucleation, the new phase is formed in a uniform substance. In heterogeneous nucleation, on the other hand, the new phase emerges on a pre-existing surface (nucleation site). Nucleation is the source of about 30% of all atmospheric aerosol, which in turn has noticeable health effects and a significant impact on climate. Nucleation can be observed in the atmosphere, studied experimentally in the laboratory, and is the subject of ongoing theoretical research. This thesis attempts to be a link between experiment and theory. By comparing simulation results to experimental data, the aim is to (i) better understand the experiments and (ii) determine where the theory needs improvement. Computational fluid dynamics (CFD) tools were used to simulate homogeneous one-component nucleation of n-alcohols in argon and helium as carrier gases, homogeneous nucleation in the water-sulfuric acid system, and heterogeneous nucleation of water vapor on silver particles. In the nucleation of n-alcohols, vapor depletion, the carrier gas effect and the carrier gas pressure effect were evaluated, with a special focus on the pressure effect, whose dependence on vapor and carrier gas properties could be specified. The investigation of nucleation in the water-sulfuric acid system included a thorough analysis of the experimental setup, determining the flow conditions, vapor losses, and the nucleation zone. Experimental nucleation rates were compared to various theoretical approaches. We found that none of the considered theoretical descriptions of nucleation captured the role of water in the process at all relative humidities. Heterogeneous nucleation was studied in the activation of silver particles in a TSI 3785 particle counter, which uses water as its working fluid. 
The role of the contact angle was investigated and the influence of incoming particle concentrations and homogeneous nucleation on counting efficiency determined.
Abstract:
The planet Mars is the Earth's neighbour in the Solar System. Planetary research stems from a fundamental need to explore our surroundings, typical of mankind. Manned missions to Mars are already being planned, and understanding the environment to which the astronauts would be exposed is of utmost importance for a successful mission. Information on the Martian environment given by models is already used in designing the landers and orbiters sent to the red planet. In particular, studies of the Martian atmosphere are crucial for instrument design, entry, descent and landing system design, landing site selection, and aerobraking calculations. Research on planetary atmospheres can also contribute to atmospheric studies of the Earth via model testing and the development of parameterizations: even after decades of modeling the Earth's atmosphere, we are still far from perfect weather predictions. On a global level, Mars has also been experiencing climate change. The aerosol effect is one of the largest unknowns in present terrestrial climate change studies, and the role of aerosol particles in any climate is fundamental: studies of climate variations on another planet can help us better understand our own global change. In this thesis I have used an atmospheric column model for Mars to study the behaviour of the lowest layer of the atmosphere, the planetary boundary layer (PBL), and I have developed nucleation (particle formation) models for Martian conditions. The models were also coupled to study, for example, fog formation in the PBL. The PBL is perhaps the most significant part of the atmosphere for landers and humans, since we live in it and experience its state, for example, as gusty winds, night frost, and fogs. However, PBL modelling in weather prediction models is still a difficult task. Mars hosts a variety of cloud types, mainly composed of water ice particles, but CO2 ice clouds also form in the very cold polar night and at high altitudes elsewhere. 
Nucleation is the first step in particle formation, and always includes a phase transition. Cloud crystals on Mars form from vapour to ice on ubiquitous, suspended dust particles. Clouds on Mars have a small radiative effect in the present climate, but it may have been more important in the past. This thesis represents an attempt to model the Martian atmosphere at the smallest scales with high resolution. The models used and developed during the course of the research are useful tools for developing and testing parameterizations for larger-scale models all the way up to global climate models, since the small-scale models can describe processes that in the large-scale models are reduced to subgrid (not explicitly resolved) scale.
Abstract:
Volatile organic compounds (VOCs) affect atmospheric chemistry and thereby also participate in climate change in many ways. The long-lived greenhouse gases and tropospheric ozone are the most important radiative forcing components warming the climate, while aerosols are the most important cooling component. VOCs can have warming effects on the climate: they participate in tropospheric ozone formation and compete for oxidants with the greenhouse gases, thus, for example, lengthening the atmospheric lifetime of methane. Some VOCs, on the other hand, cool the atmosphere by taking part in the formation of aerosol particles. Some VOCs, in addition, have direct health effects, such as the carcinogenic benzene. VOCs are emitted into the atmosphere in various processes. Primary emissions of VOCs include biogenic emissions from vegetation, biomass burning and human activities. VOCs are also produced in secondary emissions from the reactions of other organic compounds. Globally, forests are the largest source of VOCs entering the atmosphere. This thesis focuses on the measurement results of emissions and concentrations of VOCs in one of the largest vegetation zones in the world, the boreal zone. An automated sampling system was designed and built for continuous VOC concentration and emission measurements with a proton transfer reaction mass spectrometer (PTR-MS). The system measured one hour at a time in three-hourly cycles: 1) ambient volume mixing ratios of VOCs in the Scots-pine-dominated boreal forest, 2) VOC fluxes above the canopy, and 3) VOC emissions from Scots pine shoots. In addition to the online PTR-MS measurements, we determined the composition and seasonality of the VOC emissions from a Siberian larch with adsorbent samples and GC-MS analysis. The VOC emissions from Siberian larch were reported for the first time in the literature. The VOC emissions were 90% monoterpenes (mainly sabinene), with the rest sesquiterpenes (mainly α-farnesene). 
The normalized monoterpene emission potentials were highest in late summer, rising again in late autumn. The normalized sesquiterpene emission potentials were also highest in late summer, but decreased towards the autumn. The emissions of mono- and sesquiterpenes from the deciduous Siberian larch, as well as the emissions of monoterpenes measured from the evergreen Scots pine, were well described by the temperature-dependent algorithm. In the Scots-pine-dominated forest, canopy-scale emissions of monoterpenes and oxygenated VOCs (OVOCs) were of the same magnitude. Methanol and acetone were the most abundant OVOCs emitted from the forest and also in the ambient air. Annually, methanol and acetone mixing ratios were of the order of 1 ppbv. The monoterpene and the summed isoprene and 2-methyl-3-buten-2-ol (MBO) volume mixing ratios were an order of magnitude lower. The majority of the monoterpene and methanol emissions from the Scots-pine-dominated forest were explained by emissions from Scots pine shoots. The VOCs were divided into three classes based on the dynamics of the summer-time concentrations: 1) reactive compounds with local biological, anthropogenic or chemical sources (methanol, acetone, butanol and hexanal), 2) compounds whose emissions are only temperature-dependent (monoterpenes), 3) long-lived compounds (benzene, acetaldehyde). Biogenic VOC (methanol, acetone, isoprene + MBO and monoterpene) volume mixing ratios had clear diurnal patterns during summer. The ambient mixing ratios of other VOCs did not show this behaviour. During winter we did not observe systematic diurnal cycles for any of the VOCs. Different sources, removal processes and turbulent mixing explained the dynamics of the measured mixing ratios qualitatively. However, quantitative understanding will require long-term emission measurements of the OVOCs and the use of comprehensive chemistry models. Keywords: hydrocarbons, VOC, fluxes, volume mixing ratio, boreal forest
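The temperature-dependent emission algorithm referred to above is commonly written in the Guenther-type exponential form E = E_s·exp(β(T − T_s)). A minimal sketch in Python follows; the standard temperature T_s = 303.15 K and coefficient β = 0.09 K⁻¹ are typical literature values, not the parameters fitted in this work:

```python
import math

def terpene_emission(T, E_s, beta=0.09, T_s=303.15):
    """Guenther-type temperature-dependent emission algorithm:
    E = E_s * exp(beta * (T - T_s)), where E_s is the emission potential
    normalized to the standard temperature T_s (K) and beta (1/K) is an
    empirical coefficient (0.09 1/K is a commonly used value)."""
    return E_s * math.exp(beta * (T - T_s))

# At the standard temperature the algorithm returns the potential itself.
e_std = terpene_emission(303.15, E_s=1.0)   # -> 1.0
e_cold = terpene_emission(283.15, E_s=1.0)  # lower emission at 10 degC
```

Fitting E_s per season against measured emissions and this algorithm is what yields the "normalized emission potentials" whose seasonal course is described above.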
Abstract:
The thesis examines the intensification and characteristics of a policy that emphasises economic competitiveness in Finland during the 1990s and early 2000s. This accentuation of economic objectives is studied at the level of national policy-making as well as at the regional level through the policies and strategies of cities and three universities in the Helsinki region. By combining the analysis of state policies, urban strategies and university activities, the study illustrates the pervasiveness of the objective of economic competitiveness and growth across these levels and sheds light on the features and contradictions of these policies on a broad scale. The thesis is composed of five research articles and a summary article. At the level of national policies, the central focus of the thesis is on the growing role of science and technology policy as a state means to promote structural economic change and its transformation towards a broader, yet ambivalent concept of innovation policy. This shift brings forward a tension between an increasing emphasis on economic aspects – innovations and competitiveness – as well as the expanding scope of issues across a wide range of policy sectors that are being subsumed under this market- and economy-oriented framework. Related to science and technology policy, attention is paid to adjustments in university policy in which there has been increasing pressure for efficiency, rationalisation and commercialisation of academic activities. Furthermore, political efforts to build an information society through the application of information and communication technologies are analysed with particular attention to the balance between economic and social objectives. Finally, changes in state regional policy priorities and the tendency towards competitiveness are addressed. 
At the regional level, the focus of the thesis is on the policies of the cities in Finland’s capital region as well as the strategies of three universities operating in the region, namely the University of Helsinki, Helsinki University of Technology and the Helsinki School of Economics. As regards the urban level, the main focus is on the changes and characteristics of the urban economic development policy of the City of Helsinki. With respect to the universities, the thesis examines their attempts to commercialise research and thus bring academic research closer to economic interests, and pays particular attention to the contradictions of commercialisation. Related to the universities, the activities of three intermediary organisations that the universities have established in order to increase cooperation with industry are analysed. These organisations are the Helsinki Science Park, Otaniemi International Innovation Centre and LTT Research Ltd. The summary article provides a synthesis of the material presented in the five original articles and relates the results of the articles to a broader discussion concerning the emergence of competition states and entrepreneurial cities and regions. The main points of reference are Bob Jessop’s and Neil Brenner’s theses on state and urban-regional restructuring. The empirical results and considerations from Finland and the Helsinki region are used to comment on, specify and criticise specific parts of the two theses.
Abstract:
Interaction between forests and the atmosphere occurs by radiative and turbulent transport. The fluxes of energy and mass between the surface and the atmosphere directly influence the properties of the lower atmosphere and, on longer time scales, the global climate. Boreal forest ecosystems are central to the global climate system, and to its responses to human activities, because they are significant sources and sinks of greenhouse gases and of aerosol particles. The aim of the present work was to improve our understanding of the interplay between the biologically active canopy, the microenvironment and the turbulent flow. In particular, the aim was to quantify the contribution of different canopy layers to whole-forest fluxes. For this purpose, long-term micrometeorological and ecological measurements made in a Scots pine (Pinus sylvestris) forest at the SMEAR II research station in Southern Finland were used. The properties of turbulent flow are strongly modified by the interaction with the canopy elements: momentum is efficiently absorbed in the upper layers of the canopy, mean wind speed and turbulence intensities decrease rapidly towards the forest floor, and the power spectra are modulated by a spectral short-cut. In the relatively open forest, diabatic stability above the canopy explained much of the variation in velocity statistics within the canopy, except in strongly stable stratification. Large eddies, ranging from tens to a hundred meters in size, were responsible for the major fraction of turbulent transport between the forest and the atmosphere. Because of this, the eddy-covariance (EC) method proved successful for measuring energy and mass exchange inside the forest canopy, with the exception of strongly stable conditions. Vertical variations of the within-canopy microclimate, light attenuation in particular, strongly affect the assimilation and transpiration rates. 
According to model simulations, the assimilation rate decreases with height more rapidly than stomatal conductance (gs) and transpiration; consequently, the vertical source-sink distributions of carbon dioxide (CO2) and water vapor (H2O) diverge. Upscaling from the shoot scale to the canopy scale was found to be sensitive to the chosen description of stomatal control. The upscaled canopy-level CO2 fluxes can vary by as much as 15% and H2O fluxes by 30% even if the gs models are calibrated against the same leaf-level dataset. A pine forest has distinct overstory and understory layers, both of which contribute significantly to canopy-scale fluxes. The forest floor vegetation and soil accounted for between 18 and 25% of evapotranspiration and between 10 and 20% of sensible heat exchange. The forest floor was also an important deposition surface for aerosol particles: between 10 and 35% of the dry deposition of particles in the 10–30 nm size range occurred there. Owing to the northern latitude, the seasonal cycle of climatic factors strongly influences the surface fluxes. Besides these seasonal constraints, the partitioning of available energy into sensible and latent heat depends, through stomatal control, on the physiological state of the vegetation. In spring, available energy is consumed mainly as sensible heat, and the latent heat flux peaks about two months later, in July–August. On the other hand, annual evapotranspiration remains rather stable over a range of environmental conditions, and thus any increase in accumulated radiation affects primarily the sensible heat exchange. Finally, autumn temperature had a strong effect on ecosystem respiration, but its influence on photosynthetic CO2 uptake was restricted by low radiation levels. Therefore, the projected autumn warming in the coming decades will presumably reduce the positive effects of earlier spring recovery on the carbon uptake potential of boreal forests.
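The eddy-covariance method mentioned above computes a turbulent flux as the covariance between fluctuations of the vertical wind speed and a scalar concentration (Reynolds decomposition). A minimal sketch of that principle on purely synthetic data; the variable names and numbers are illustrative assumptions, not values from the thesis:

```python
import numpy as np

def eddy_covariance_flux(w, c):
    """Turbulent flux as the covariance of vertical wind speed
    and scalar concentration fluctuations (Reynolds decomposition)."""
    w_prime = w - np.mean(w)   # vertical wind fluctuation (m/s)
    c_prime = c - np.mean(c)   # scalar fluctuation (e.g. CO2, umol/m3)
    return np.mean(w_prime * c_prime)

# Synthetic 30-min record at 10 Hz: updrafts carry higher concentrations,
# so the flux should come out positive (net upward transport).
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.3, 18000)
c = 15.0 + 2.0 * w + rng.normal(0.0, 1.0, 18000)
flux = eddy_covariance_flux(w, c)
```

In real EC processing the raw series would additionally be despiked, detrended and rotated into the mean-wind coordinate frame before the covariance is taken; the sketch shows only the core covariance step.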
Abstract:
According to certain arguments, computation is observer-relative either in the sense that many physical systems implement many computations (Hilary Putnam), or in the sense that almost all physical systems implement all computations (John Searle). If sound, these arguments have a potentially devastating consequence for the computational theory of mind: if arbitrary physical systems can be seen to implement arbitrary computations, the notion of computation seems to lose all explanatory power as far as brains and minds are concerned. David Chalmers and B. Jack Copeland have attempted to counter these relativist arguments by placing certain constraints on the definition of implementation. In this thesis, I examine their proposals and find both wanting in some respects. In the course of this examination, I give a formal definition of the class of combinatorial-state automata, upon which Chalmers's account of implementation is based. I show that this definition implies two theorems (one an observation due to Curtis Brown) concerning the computational power of combinatorial-state automata, theorems which speak against founding the theory of implementation upon this formalism. Toward the end of the thesis, I sketch a definition of the implementation of Turing machines in dynamical systems, and offer this as an alternative to Chalmers's and Copeland's accounts of implementation. I demonstrate that this definition does not imply Searle's claim of the universal implementation of computations. However, the definition may support claims that are weaker than Searle's, yet still troubling to the computationalist. A kernel of relativity remains in implementation at any rate, since the interpretation of physical systems seems itself to be an observer-relative matter, to some degree at least. This observation helps clarify the role the notion of computation can play in cognitive science.
Specifically, I will argue that the notion should be conceived as an instrumental rather than as a fundamental or foundational one.
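A combinatorial-state automaton, as Chalmers describes it, is an automaton whose total state is a vector of substates, each updated jointly by a transition rule over the whole state and input vectors. The following is a purely illustrative sketch of that idea; the class, the transition rule and the two-bit-counter example are my own constructions, not definitions from the thesis:

```python
class CSA:
    """Minimal combinatorial-state automaton: the total state is a
    vector of substates, and one transition rule maps the pair
    (state vector, input vector) to the next state vector."""
    def __init__(self, transition, initial_state):
        self.transition = transition      # (state, inp) -> next state
        self.state = tuple(initial_state)

    def step(self, inp):
        self.state = tuple(self.transition(self.state, tuple(inp)))
        return self.state

# Example: a two-substate binary CSA behaving as a 2-bit counter.
def count_transition(state, inp):
    a, b = state            # high bit, low bit
    carry = inp[0]          # increment signal (0 or 1)
    return ((a + (b & carry)) % 2, (b + carry) % 2)

csa = CSA(count_transition, (0, 0))
for _ in range(3):
    csa.step((1,))          # state cycles (0,1) -> (1,0) -> (1,1)
```

The philosophical point at issue is precisely what it takes for a physical system to count as implementing such an abstract structure; the code only fixes the abstract structure itself.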
Abstract:
Atmospheric particles affect the radiation balance of the Earth and thus the climate. New particle formation by nucleation has been observed in diverse atmospheric conditions, but the actual formation path is still unknown. The prevailing conditions can be exploited to evaluate proposed formation mechanisms. This study aims to improve our understanding of new particle formation from the viewpoint of atmospheric conditions. The role of atmospheric conditions in particle formation was studied through atmospheric measurements, theoretical model simulations and simulations based on observations. Two separate column models were further developed for aerosol and chemical simulations. The model simulations allowed us to extend the study from local conditions to varying conditions in the atmospheric boundary layer, while the long-term measurements characterised especially the mean conditions associated with new particle formation. The observations show a statistically significant difference in meteorological and background aerosol conditions between observed event and non-event days. New particle formation above the boreal forest is associated with strong convective activity, low humidity and a low condensation sink. The probability of a particle formation event is predicted by an equation formulated for upper boundary layer conditions. The model simulations call into question whether kinetic sulphuric-acid-induced nucleation is the primary particle formation mechanism in the presence of organic vapours. At the same time, the simulations show that ignoring spatial and temporal variation in new particle formation studies may lead to faulty conclusions. On the other hand, the theoretical simulations indicate that short-scale variations in temperature and humidity are unlikely to have a significant effect on the mean binary water–sulphuric acid nucleation rate. The study emphasises the significance of mixing and fluxes in particle formation studies, especially in the atmospheric boundary layer.
The further developed models enable extensive studies of aerosol physics and chemistry in the future.
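The condensation sink mentioned above measures how rapidly condensable vapour is scavenged by the pre-existing aerosol population; a low sink favours new particle formation. A hedged sketch of the standard calculation using the Fuchs-Sutugin transition-regime correction with unit mass accommodation; the diffusion coefficient, mean free path and example size distribution are illustrative assumptions, not values from the study:

```python
import numpy as np

def condensation_sink(radii, number_conc, D=1.0e-5, mfp=1.0e-7):
    """Condensation sink (s^-1) of an aerosol size distribution:
    CS = 4*pi*D * sum_i beta(Kn_i) * r_i * N_i, with the
    Fuchs-Sutugin correction beta for the transition regime.
    radii in m, number_conc in m^-3; D (m^2/s) and mfp (m) are
    illustrative values for a vapour such as sulphuric acid."""
    kn = mfp / radii                                   # Knudsen number
    beta = (1.0 + kn) / (1.0 + 1.71 * kn + 1.333 * kn**2)
    return 4.0 * np.pi * D * np.sum(beta * radii * number_conc)

# Illustrative background mode: 100 nm diameter particles at 1000 cm^-3
cs = condensation_sink(np.array([50e-9]), np.array([1000e6]))
```

Values on the order of 10^-3 to 10^-2 s^-1 are typical of boreal background air, which is why event days tend to cluster at the low end of the observed range.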
Abstract:
A better understanding of the limiting step in a first-order phase transition, the nucleation process, is of major importance to a variety of scientific fields ranging from atmospheric sciences to nanotechnology and even cosmology. This is because in most phase transitions the new phase is separated from the mother phase by a free energy barrier, which is crossed in a process called nucleation. Nowadays it is considered that a significant fraction of all atmospheric particles is produced by vapor-to-liquid nucleation. In atmospheric sciences, as in other scientific fields, the theoretical treatment of nucleation is mostly based on a theory known as Classical Nucleation Theory. However, Classical Nucleation Theory is known to have only limited success in predicting the rate at which vapor-to-liquid nucleation takes place under given conditions. This thesis studies unary homogeneous vapor-to-liquid nucleation from a statistical mechanics viewpoint. We apply Monte Carlo simulations of molecular clusters to calculate the free energy barrier separating the vapor and liquid phases and compare our results against laboratory measurements and Classical Nucleation Theory predictions. According to our results, the work of adding a monomer to a cluster in equilibrium vapor is accurately described by the liquid drop model applied by Classical Nucleation Theory once the clusters are larger than some threshold size. The threshold cluster sizes contain only a few to some tens of molecules, depending on the interaction potential and temperature. However, the error made in modeling the smallest clusters as liquid drops results in an erroneous absolute value for the cluster work of formation throughout the size range, as predicted by the McGraw-Laaksonen scaling law.
By calculating correction factors to Classical Nucleation Theory predictions for the nucleation barriers of argon and water, we show that the corrected predictions produce nucleation rates that are in good agreement with experiments. For the smallest clusters, the deviation between the simulation results and the liquid drop values is accurately modelled by the low-order virial coefficients at modest temperatures and vapor densities, in other words, in the validity range of the non-interacting cluster theory of Frenkel, Band and Bijl. Our results do not indicate a need for a size-dependent replacement free energy correction. The results also indicate that Classical Nucleation Theory predicts the size of the critical cluster correctly. We also present a new method for calculating the equilibrium vapor density, the size dependence of the surface tension and the planar surface tension directly from cluster simulations. Finally, we show that the size dependence of the cluster surface tension at the equimolar surface is a function of the virial coefficients, a result confirmed by our cluster simulations.
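Classical Nucleation Theory treats the critical cluster as a liquid drop with the planar surface tension, which yields closed-form expressions for the critical radius and the free energy barrier. A sketch of those textbook formulas; the water-like parameter values below are illustrative assumptions, not results from the thesis:

```python
import math

def cnt_critical_cluster(sigma, v_mol, T, S):
    """Classical Nucleation Theory liquid-drop expressions:
    critical radius r* = 2*sigma*v / (kT ln S) and barrier
    dG* = 16*pi*sigma^3*v^2 / (3*(kT ln S)^2).
    sigma: planar surface tension (N/m), v_mol: molecular
    volume (m^3), T: temperature (K), S: saturation ratio > 1."""
    k = 1.380649e-23                      # Boltzmann constant (J/K)
    kt_ln_s = k * T * math.log(S)
    r_star = 2.0 * sigma * v_mol / kt_ln_s
    dg_star = 16.0 * math.pi * sigma**3 * v_mol**2 / (3.0 * kt_ln_s**2)
    n_star = (4.0 / 3.0) * math.pi * r_star**3 / v_mol   # molecules
    return r_star, dg_star, n_star

# Water-like parameters at 260 K and saturation ratio 5 (illustrative)
r, dG, n = cnt_critical_cluster(0.077, 3.0e-29, 260.0, 5.0)
```

With these inputs the critical cluster contains some tens of molecules, which is exactly the size regime where, according to the thesis, the liquid drop assumption starts to break down and molecular simulations are needed.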
Abstract:
One of the effects of the Internet is that the dissemination of scientific publications has, within a few years, migrated to electronic formats. The basic business practices between libraries and publishers for selling and buying content, however, have not changed much. In protest against the high subscription prices of mainstream publishers, scientists have started Open Access (OA) journals and e-print repositories, which distribute scientific information freely. Despite widespread agreement among academics that OA would be the optimal distribution mode for publicly financed research results, such channels still constitute only a marginal phenomenon in the global scholarly communication system. This paper discusses, in view of the experiences of the last ten years, the many barriers hindering a rapid proliferation of Open Access. The discussion is structured according to the main OA channels: peer-reviewed journals for primary publishing, and subject-specific and institutional repositories for secondary parallel publishing. It also discusses the types of barriers, which can be classified as the legal framework, the information technology infrastructure, business models, indexing services and standards, the academic reward system, marketing, and critical mass.