143 results for Historical Methods.
Abstract:
Modern-day weather forecasting is highly dependent on Numerical Weather Prediction (NWP) models as the main data source. The evolving state of the atmosphere can be numerically predicted by solving a set of hydrodynamic equations, if the initial state is known. However, such a modelling approach always contains approximations that by and large depend on the purpose of use and the resolution of the models. Present-day NWP systems operate with horizontal model resolutions in the range from about 40 km down to 10 km. Recently, the aim has been to reach scales of 1-4 km operationally. This requires fewer approximations in the model equations, a more complex treatment of physical processes and, furthermore, more computing power. This thesis concentrates on the physical parameterization methods used in high-resolution NWP models. The main emphasis is on the validation of the grid-size-dependent convection parameterization in the High Resolution Limited Area Model (HIRLAM) and on a comprehensive intercomparison of radiative-flux parameterizations. In addition, the problems related to wind prediction near the coastline are addressed with high-resolution meso-scale models. The grid-size-dependent convection parameterization is clearly beneficial for NWP models operating with a dense grid. Results show that the current convection scheme in HIRLAM is still applicable down to a 5.6 km grid size. However, with further improved model resolution, the tendency of the model to overestimate strong precipitation intensities increases in all the experiment runs. For clear-sky longwave radiation, the parameterization schemes used in NWP models provide much better results than simple empirical schemes. On the other hand, for the shortwave part of the spectrum, the empirical schemes are more competitive at producing fairly accurate surface fluxes. Overall, even the complex radiation parameterization schemes used in NWP models seem to be slightly too transparent for both long- and shortwave radiation in clear-sky conditions. For cloudy conditions, simple cloud correction functions are tested. In the case of longwave radiation, the empirical cloud correction methods provide rather accurate results, whereas for shortwave radiation the benefit is only marginal. Idealised high-resolution two-dimensional meso-scale model experiments suggest that the reason for the observed formation of the afternoon low-level jet (LLJ) over the Gulf of Finland is an inertial oscillation mechanism when the large-scale flow is from the south-east or the west. The LLJ is further enhanced by the sea-breeze circulation. A three-dimensional HIRLAM experiment, with a 7.7 km grid size, is able to generate an LLJ flow structure similar to that suggested by the 2D experiments and observations. It is also pointed out that improved model resolution does not necessarily lead to better wind forecasts in a statistical sense. In nested systems, the quality of the large-scale host model is crucial, especially if the inner meso-scale model domain is small.
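The abstract does not name the simple empirical clear-sky schemes that were compared against the NWP radiation codes. Purely as an illustration of what such a scheme looks like, the sketch below evaluates a classic Brunt-type formula for the downwelling clear-sky longwave flux from screen-level temperature and water vapour pressure; the coefficients are commonly quoted textbook values and are an assumption here, not results from the thesis.

import math

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant (W m^-2 K^-4)

def brunt_clear_sky_lw_down(t_air_k, vapour_pressure_hpa, a=0.52, b=0.065):
    """Brunt-type empirical clear-sky downwelling longwave flux (W m^-2).

    The effective atmospheric emissivity is modelled as eps = a + b*sqrt(e),
    where e is the screen-level water vapour pressure in hPa. The default
    coefficients are illustrative textbook values, not those evaluated in
    the thesis.
    """
    emissivity = a + b * math.sqrt(vapour_pressure_hpa)
    return emissivity * SIGMA * t_air_k ** 4

# Example: a mild clear-sky case with T = 283 K and e = 10 hPa
print(round(brunt_clear_sky_lw_down(283.0, 10.0), 1), "W m-2")

Such a single-level formula needs only screen-level observations, in contrast to the band models used in NWP systems, which integrate over the full atmospheric profile.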
Abstract:
This work focuses on the role of macroseismology in the assessment of seismicity and probabilistic seismic hazard in Northern Europe. The main type of data under consideration is a set of macroseismic observations available for a given earthquake. The macroseismic questionnaires used to collect earthquake observations from local residents since the late 1800s constitute a special part of the seismological heritage in the region. Information on the earthquakes felt on the coasts of the Gulf of Bothnia between 31 March and 2 April 1883 and on 28 July 1888 was retrieved from contemporary Finnish and Swedish newspapers, while the earthquake of 4 November 1898 GMT is an example of an early systematic macroseismic survey in the region. A data set of more than 1200 macroseismic questionnaires is available for the earthquake in Central Finland on 16 November 1931. Basic macroseismic investigations, including the preparation of new intensity data point (IDP) maps, were conducted for these earthquakes. Previously disregarded usable observations were found in the press. The improved collection of IDPs for the 1888 earthquake shows that this event was a rare occurrence in the area. In contrast to earlier notions, it was felt on both sides of the Gulf of Bothnia. The data on the earthquake of 4 November 1898 GMT were augmented with historical background information discovered in various archives and libraries. This earthquake was of some concern to the authorities, because extra fire inspections were conducted in at least three towns, namely Tornio, Haparanda and Piteå, located in the centre of the area of perceptibility. This event posed the indirect hazard of fire, although its magnitude of around 4.6 was minor on the global scale. The distribution of slightly damaging intensities was wider than previously outlined. This may have resulted from the amplification of ground shaking in the soft soils of the coast and river valleys, where most of the population was found. The large data set of the 1931 earthquake provided an opportunity to apply statistical methods and assess methodologies that can be used when dealing with macroseismic intensity. The data set was evaluated using correspondence analysis. Different approaches, such as gridding, were tested to estimate the macroseismic field from the intensity values distributed irregularly in space. In general, the characteristics of intensity warrant careful consideration. A wider recognition of intensity as an ordinal quantity affected by uncertainties is advocated. A parametric earthquake catalogue comprising entries from both the macroseismic and instrumental eras was used for probabilistic seismic hazard assessment. The parametric-historic methodology was applied to estimate seismic hazard at a given site in Finland and to prepare a seismic hazard map for Northern Europe. The interpretation of these results is an important issue, because the recurrence times of damaging earthquakes may well exceed thousands of years in an intraplate setting such as Northern Europe. This application may therefore be seen as an example of short-term hazard assessment.
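The abstract mentions gridding among the approaches tested for estimating the macroseismic field from irregularly distributed intensity values, without specifying the procedure. The sketch below shows the general idea under stated assumptions: a handful of made-up intensity data points, a regular grid, and nearest-neighbour interpolation (chosen here because, unlike averaging interpolators, it keeps the ordinal intensity classes intact). The use of scipy and the data values are illustrative, not the thesis's material or method.

import numpy as np
from scipy.interpolate import griddata

# Hypothetical intensity data points (IDPs): longitude, latitude, intensity class
lon = np.array([24.9, 25.7, 26.3, 27.1, 25.1, 26.8])
lat = np.array([62.2, 62.6, 62.4, 62.9, 63.0, 62.1])
intensity = np.array([5, 4, 4, 3, 3, 4])

# Regular target grid covering the area spanned by the observations
grid_lon, grid_lat = np.meshgrid(
    np.linspace(lon.min(), lon.max(), 50),
    np.linspace(lat.min(), lat.max(), 50),
)

# Nearest-neighbour gridding assigns each grid cell the class of the closest
# IDP, so the result stays within the original ordinal intensity classes.
field = griddata(
    points=np.column_stack([lon, lat]),
    values=intensity,
    xi=(grid_lon, grid_lat),
    method="nearest",
)
print(field.shape, int(field.min()), int(field.max()))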
Abstract:
An efficient and statistically robust solution for the identification of asteroids among numerous sets of astrometry is presented. In particular, numerical methods have been developed for the short-term identification of asteroids at discovery, and for the long-term identification of scarcely observed asteroids over apparitions, a task which had lacked a robust method until now. The methods are based on the solid foundation of statistical orbital inversion, properly taking into account the observational uncertainties, which allows for the detection of practically all correct identifications. Through the use of dimensionality-reduction techniques and efficient data structures, the exact methods have a loglinear, that is, O(n log n), computational complexity, where n is the number of included observation sets. The methods developed are thus suitable for future large-scale surveys, which anticipate a substantial increase in the astrometric data rate. Due to the discontinuous nature of asteroid astrometry, separate sets of astrometry must be linked to a common asteroid from the very first discovery detections onwards. The reason for the discontinuity in the observed positions is the rotation of the observer with the Earth as well as the motion of the asteroid and the observer about the Sun. Therefore, the aim of identification is to find a set of orbital elements that reproduces the observed positions with residuals similar to the inevitable observational uncertainty. Unless the astrometric observation sets are linked, the corresponding asteroid is eventually lost as the uncertainty of the predicted positions grows too large to allow successful follow-up. Whereas the presented identification theory and the numerical comparison algorithm are generally applicable, that is, also in fields other than astronomy (e.g., in the identification of space debris), the numerical methods developed for asteroid identification can immediately be applied to all objects on heliocentric orbits with negligible effects due to non-gravitational forces in the time frame of the analysis. The methods developed have been successfully applied to various identification problems. Simulations have shown that the methods developed are able to find virtually all correct linkages despite challenges such as numerous scarce observation sets, astrometric uncertainty, numerous objects confined to a limited region on the celestial sphere, long linking intervals, and substantial parallaxes. Tens of previously unknown main-belt asteroids have been identified with the short-term method in a preliminary study to locate asteroids among numerous unidentified sets of single-night astrometry of moving objects, and scarce astrometry obtained nearly simultaneously with Earth-based and space-based telescopes has been successfully linked despite a substantial parallax. Using the long-term method, thousands of realistic 3-linkages, typically spanning several apparitions, have so far been found among designated observation sets each spanning less than 48 hours.
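The abstract attributes the loglinear scaling to dimensionality reduction combined with efficient data structures, without naming either. The sketch below illustrates the general pattern under explicit assumptions: each observation set is assumed to have already been reduced to a low-dimensional comparison vector (random numbers stand in for the real reduction here), and a k-d tree is used so that candidate linkages are found without an all-pairs O(n^2) comparison. The use of scipy's cKDTree is an illustrative choice, not the thesis's implementation.

import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)

# Stand-in for the dimensionality reduction: each of n observation sets is
# represented by a 3-dimensional comparison vector. In the real problem these
# vectors would come from the statistical orbital-inversion machinery.
n_sets = 10_000
features = rng.uniform(size=(n_sets, 3))

# Building the k-d tree costs O(n log n); range queries then return only the
# nearby sets, avoiding the O(n^2) comparison of every pair of sets.
tree = cKDTree(features)
candidate_pairs = tree.query_pairs(r=0.01)

print(f"{len(candidate_pairs)} candidate linkages to pass on to full orbital inversion")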
Abstract:
This three-phase design research describes the modelling processes for DC-circuit phenomena. The first phase presents an analysis of the development of the historical models of DC circuits in the context of the construction of Volta's pile at the turn of the 18th and 19th centuries. The second phase involves the design of a teaching experiment for comprehensive school third graders. Among other considerations, the design work utilises the results of the first phase and the research literature on pupils' mental models of DC-circuit phenomena. The third phase of the research was concerned with the realisation of the planned teaching experiment. The aim of this phase was to study the development of the external representations of DC-circuit phenomena in a small group of third graders. The aim of the study has been to search for new ways to guide pupils to learn DC-circuit phenomena while emphasising understanding at the qualitative level. Thus electricity, which has been perceived as a difficult and abstract subject, could be learnt more comprehensively. In particular, research on younger pupils' learning of electricity concepts has attracted little interest at the international level, although DC-circuit phenomena are also taught in the lower classes of comprehensive schools. The results of this study are important, because the teaching of natural sciences in the lower classes of comprehensive schools has been increasing, and attempts are being made to develop this trend in Finland. In the theoretical part of the research an Experimental-centred representation approach, which emphasises the role of experimentalism in the development of pupils' representations, is created. According to this approach, learning at the qualitative level consists of empirical operations, such as experimenting, observation, perception and the prequantification of natural phenomena, and modelling operations, such as explaining and reasoning. Besides the planning of teaching, the new approach can be used as an analysis tool in describing both historical modelling and the development of pupils' representations. In the first phase of the study, the research question was: How did the historical models of DC-circuit phenomena develop in Volta's time? The analysis uncovered three qualitative historical models associated with the historical concept formation process. The models include conceptions of the electric circuit as the scene of DC-circuit phenomena, the comparative electric-current phenomenon as a cause of different observable effect phenomena, and the strength of the battery as a cause of the electric-current phenomenon. These models describe the concept formation process and its phases in Volta's time. The models are portrayed in the analysis using fragments of the models, where observation-based fragments and theoretical fragments are distinguished from each other. The results emphasise the significance of qualitative concept formation and the role of language in the historical modelling of DC-circuit phenomena. For this reason these viewpoints are stressed in the planning of the teaching experiment in the second phase of the research. In addition, the design process utilised the experimentation behind the historical models of DC-circuit phenomena. In the third phase of the study the research question is as follows: How will the small group's external representations of DC-circuit phenomena develop during the teaching experiment? The main question is divided into the following two sub-questions: What kind of talk exists in the small group's learning? What kinds of external representations of DC-circuit phenomena exist in the small group's discourse during the teaching experiment? The analysis revealed that the teaching experiment succeeded in its aim of activating talk in the small group. The specially designed connection cards proved particularly successful in activating talk. The connection cards are cards that represent the components of an electric circuit. In the teaching experiment the pupils constructed different connections with the connection cards and discussed what kinds of DC-circuit phenomena would take place in the corresponding real connections. The talk of the small group was analysed by comparing two situations: first, when the small group discussed connections made with the connection cards, and second, when it discussed the same connections made with real components. According to the results, the talk of the small group included more higher-order thinking when using the connection cards than with similar real components. In order to answer the second sub-question, concerning the small group's external representations that appeared in the talk during the teaching experiment, student talk was visualised by means of fragment maps which incorporate the electric circuit, the electric current and the source voltage. The fragment maps represent the gradual development of the external representations of DC-circuit phenomena in the small group during the teaching experiment. The results of the study challenge the results of previous research on the abstractness and difficulty of electricity concepts. According to this research, the external representations of DC-circuit phenomena clearly developed in the small group of third graders. Furthermore, the fragment maps show that although the theoretical explanations of DC-circuit phenomena, of the kind obtained as results in typical mental model studies, remain undeveloped, learning at the qualitative level of understanding does take place.
Abstract:
Wood is an important material for the construction and pulping industries. In the first part of this thesis, the microfibril angle of Sitka spruce wood was studied using x-ray diffraction. Sitka spruce (Picea sitchensis [Bong.] Carr.) is native to the west coast of North America, but due to its fast growth rate it has also been imported to Europe. So far, its nanometre-scale properties have not been systematically characterised. In this thesis the microfibril angle of Sitka spruce was shown to depend significantly on the origin of the tree in the first annual rings near the pith. Wood can be further processed to separate lignin from cellulose and hemicelluloses. Solid cellulose can act as a reducer for metal ions and it is also a porous support for nanoparticles. By chemically reducing nickel or copper in the solid cellulose support it is possible to obtain small nanoparticles on the surfaces of the cellulose fibres. Cellulose-supported metal nanoparticles can potentially be used as environmentally friendly catalysts in organic chemistry reactions. In this thesis the size of the nickel- and copper-containing nanoparticles was studied using anomalous small-angle x-ray scattering and wide-angle x-ray scattering. The anomalous small-angle x-ray scattering experiments showed that the crystallite size of the copper oxide nanoparticles was the same as the size of the nanoparticles, so the nanoparticles were single crystals. The nickel-containing nanoparticles were amorphous, but crystallised upon heating. The size of the nanoparticles was observed to be smaller when the reduction of nickel was done in an aqueous ammonium hydrate medium than when it was done in aqueous solution. Lignin is typically seen as a side-product of the wood industries. Lignin is the second most abundant natural polymer on Earth, and it has the potential to be a useful material for many purposes in addition to being an energy source for the pulp mills. In this thesis, the morphology of several lignins, which were produced from wood by different separation methods, was studied using small-angle and ultra-small-angle x-ray scattering. It was shown that the fractal model previously proposed for the lignin structure does not apply to most of the extracted lignin types. The only lignin to which the fractal model could be applied was kraft lignin. In aqueous solutions the average shape of the low-molar-mass kraft lignin particles was observed to be elongated and flat. The average shape does not necessarily correspond to the shape of the individual particles because of the polydispersity of the fraction and the self-association of the particles. Lignins, and especially lignosulfonate, have many uses as dispersants, binders and emulsion stabilisers. In this thesis work the self-association of low-molar-mass lignosulfonate macromolecules was observed using small-angle x-ray scattering. By taking into account the polydispersity of the studied lignosulfonate fraction, the shape of the lignosulfonate particles was determined to be flat by fitting an oblate ellipsoidal model to the scattering intensity.
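The abstract states that an oblate ellipsoidal model was fitted to the small-angle scattering intensity, but does not give the model itself. As a rough illustration, the sketch below evaluates the standard orientationally averaged form factor of a homogeneous ellipsoid of revolution, which is the kind of model such a fit would typically use; the semi-axis values and the assumption of monodisperse particles are illustrative only (the thesis explicitly accounts for polydispersity).

import numpy as np

def sphere_amplitude(x):
    """Scattering amplitude of a homogeneous sphere: 3(sin x - x cos x)/x^3."""
    x = np.asarray(x, dtype=float)
    small = x < 1e-6
    safe = np.where(small, 1.0, x)
    return np.where(small, 1.0, 3.0 * (np.sin(safe) - safe * np.cos(safe)) / safe**3)

def oblate_ellipsoid_form_factor(q, r_polar, r_equatorial, n_angles=400):
    """Orientationally averaged form factor P(q) of an ellipsoid of revolution.

    r_polar is the semi-axis along the rotation axis and r_equatorial the
    perpendicular semi-axis; r_polar < r_equatorial gives an oblate shape.
    Monodisperse, homogeneous particles are assumed.
    """
    alpha = np.linspace(0.0, np.pi / 2, n_angles)  # angle between q and the axis
    r_eff = np.sqrt(r_polar**2 * np.cos(alpha)**2 + r_equatorial**2 * np.sin(alpha)**2)
    amp = sphere_amplitude(np.outer(q, r_eff))     # shape (len(q), n_angles)
    # Orientational average with weight sin(alpha); simple rectangle rule.
    d_alpha = alpha[1] - alpha[0]
    return np.sum(amp**2 * np.sin(alpha), axis=1) * d_alpha

# Illustrative oblate particle: 1 nm polar and 3 nm equatorial semi-axes
q = np.linspace(0.05, 5.0, 100)  # scattering vector magnitude, 1/nm
print(oblate_ellipsoid_form_factor(q, r_polar=1.0, r_equatorial=3.0)[:3])

In a real fit, this P(q) would be scaled and averaged over a size distribution before being compared with the measured intensity.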
Abstract:
Marja Heinonen's dissertation Verkkomedian käyttö ja tutkiminen. Iltalehti Online 1995-2001 (The use and study of online media: Iltalehti Online 1995-2001) describes the usage of the new internet-based news service Iltalehti Online during its first years of existence, 1995-2001. The study focuses on the content of the service and users' attitudes towards the new medium and its contents. Heinonen has also analyzed and described the research methods that can be used in the study of any new media phenomenon when there is no historical perspective from which to do the research. Heinonen has created a process model for the research of a net medium, which is based on a multidimensional approach. She has chosen an iterative research method inspired by Sudweeks and Simoff's CEDA methodology, in which qualitative and quantitative methods take turns, both creating results and generating new research questions. The dissertation discusses and describes the possibilities of combining several research methods in the study of online news media. On a general level it discusses the methodological possibilities of researching a completely new media form when there is no historical perspective. The result of these discussions is in favour of multidimensional methods. The empirical research was built around three cases concerning Iltalehti Online and its users: log analysis 1996-1999, interviews 1999 and clustering 2000-2001. Even though the results of the different cases were somewhat conflicting, the central results from the analysis of Iltalehti Online 1995-2001 are as follows:
- Reading was strongly determined by gender.
- The structure of Iltalehti Online guided the reading strongly.
- People did not make a clear distinction in content between news and entertainment.
- Users created new habits in their everyday life during the first years of using Iltalehti Online. These habits were categorized as follows: a break between everyday routines, an established habit, and a new practice within the rhythm of the day.
- In the clustering of the users, sports, culture and celebrities were the most distinguishing contents. Users did not move across these borders as much as within them.
The dissertation contributes to the development of multidimensional research methods in the field of emerging media phenomena. It is also a unique description of a phase of development in media history, based on unique research material. No such information (logs + demographics) is available for any other Finnish online news medium, either from the early years or today.
Abstract:
This thesis consists of an introduction, four research articles and an appendix. The thesis studies relations between two different approaches to the continuum limit of models of two-dimensional statistical mechanics at criticality. The approach of conformal field theory (CFT) can be thought of as the algebraic classification of some basic objects in these models. It has been successfully used by physicists since the 1980s. The other approach, Schramm-Loewner evolutions (SLEs), is a recently introduced set of mathematical methods for studying random curves or interfaces occurring in the continuum limit of the models. The first and second included articles argue, on the basis of statistical mechanics, what a plausible relation between SLEs and conformal field theory would be. The first article studies multiple SLEs, that is, several random curves simultaneously in a domain. The proposed definition is compatible with a natural commutation requirement suggested by Dubédat. The curves of a multiple SLE may form different topological configurations, "pure geometries". We conjecture a relation between the topological configurations and the CFT concepts of conformal blocks and operator product expansions. Example applications of multiple SLEs include crossing probabilities for percolation and the Ising model. The second article studies SLE variants that represent models with boundary conditions implemented by primary fields. The best known of these, SLE(kappa, rho), is shown to be simple in terms of the Coulomb gas formalism of CFT. In the third article the space of local martingales for variants of SLE is shown to carry a representation of the Virasoro algebra. Finding this structure is guided by the relation of SLEs and CFTs in general, but the result is established in a straightforward fashion. This article, too, emphasizes multiple SLEs and proposes a possible way of treating pure geometries in terms of the Coulomb gas. The fourth article states results of applications of the Virasoro structure to the open questions of SLE reversibility and duality. Proofs of the stated results are provided in the appendix. The objective is an indirect computation of certain polynomial expected values. Provided that these expected values exist, in generic cases they are shown to possess the desired properties, thus giving support for both reversibility and duality.
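For orientation, the standard definitions of the objects named above (not spelled out in the abstract, but standard in the SLE literature) can be stated briefly. A chordal SLE in the upper half-plane is the random Loewner chain driven by scaled Brownian motion:

\partial_t g_t(z) = \frac{2}{g_t(z) - W_t}, \qquad g_0(z) = z, \qquad W_t = \sqrt{\kappa}\, B_t \quad \text{for SLE}(\kappa).

The variant SLE(kappa, rho) with one marked boundary point x, writing V_t = g_t(x), replaces the driving process by

\mathrm{d}W_t = \sqrt{\kappa}\,\mathrm{d}B_t + \frac{\rho\,\mathrm{d}t}{W_t - V_t}, \qquad \mathrm{d}V_t = \frac{2\,\mathrm{d}t}{V_t - W_t},

so that the drift towards or away from the marked point implements the boundary condition discussed in the second article.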
Abstract:
This doctoral dissertation in sociology examines how human heredity became a scientific, political and personal issue in 20th-century Finland. The study focuses on the institutionalisation of rationales and technologies concerning heredity in the context of Finnish medicine and health care. The analysis concentrates specifically on the introduction and development of prenatal screening within maternity care. The data comprise medical articles, policy documents and committee reports, as well as popular guidebooks and health magazines. The study commences with an analysis of the early 20th-century discussions on racial hygiene. It ends with an analysis of the choices given to pregnant mothers and families at present. Freedom to choose, considered by geneticists and many others as a guarantee of the ethicality of medical applications, is presented in this study as a historically, politically and scientifically constructed issue. New medical testing methods have generated new possibilities for governing life itself. However, they have also created new ethical problems. Drawing on recent historical data, the study illustrates how medical risk rationales on heredity have been introduced by the medical profession into Finnish health care. It also depicts the medical profession's ambivalence between maintaining patients' autonomy and utilizing, for example, prenatal testing according to health policy interests. Personalized risk is discussed as a result of the empirical analysis. It is indicated that increasing risk awareness among the public, as well as offering choices, has had unintended consequences. According to doctors, present-day parents often want to control risks more than is considered justified or acceptable. People's hopes of anticipating the health and normality of their future children have exceeded the limits offered by medicine. The individualization of the government of heredity is closely linked to a process termed depoliticization. The concept refers to the disembedding of medical genetics from its social contexts. Prenatal screening is regarded as being based on individual choice facilitated by neutral medical knowledge. However, prenatal screening within maternity care also has its basis in health policy aims and economic calculations. The methodological basis of the study lies in Michel Foucault's writings on the history of thought, as well as in science and technology studies.
Abstract:
Increasing antimicrobial resistance in bacteria has led to the need for a better understanding of antimicrobial usage patterns. In 1999, the World Organisation for Animal Health (OIE) recommended that an international ad hoc group be established to address human and animal health risks related to antimicrobial resistance and the contribution of antimicrobial usage in veterinary medicine. In European countries the need for continuous recording of the usage of veterinary antimicrobials, as well as for animal species-specific and indication-based data on usage, has been acknowledged. Finland has been among the first countries to develop prudent use guidelines in veterinary medicine, as the Ministry of Agriculture and Forestry issued the first animal species-specific, indication-based recommendations for antimicrobial use in animals in 1996. These guidelines were revised in 2003 and 2009. However, surveillance of the species-specific use of antimicrobials in animals has not been performed in Finland. This thesis provides animal species-specific information on indication-based antimicrobial usage. Different methods of data collection have been utilized. Information on antimicrobial usage in animals has been gathered in four studies (studies A-D). Material from studies A, B and C has been used in an overlapping manner in the original publications I-IV. Study A (original publications I & IV) presents a retrospective cross-sectional survey of prescriptions for small animals at the Veterinary Teaching Hospital of the University of Helsinki. Prescriptions for antimicrobial agents (n = 2281) were collected and usage patterns, such as the indication and length of treatment, were reviewed. Most of the prescriptions were for dogs (78%), primarily for the treatment of skin and ear infections, most of which were treated with cephalexin for a median period of 14 days. Prescriptions for cats (18%) were most often for the treatment of urinary tract infections with amoxicillin for a median length of 10 days. Study B (original publication II) was a retrospective cross-sectional survey in which prescriptions for animals were collected from 17 University Pharmacies nationwide. Antimicrobial prescriptions (n = 1038), mainly for dogs (65%) and cats (19%), were investigated. In this study, too, cephalexin and amoxicillin were the most frequently used drugs for dogs and cats, respectively. In study C (original publications III & IV), the indication-based usage of antimicrobials by practicing veterinarians was analyzed using a prospective questionnaire. Randomly selected practicing veterinarians in Finland (n = 262) recorded all their antimicrobial usage during a 7-day study period. Cattle (46%) with mastitis were the most common patients receiving antimicrobial treatment, generally intramuscular penicillin G or intramammary treatment with ampicillin and cloxacillin. The median length of treatment was four days, regardless of the route of administration. Antimicrobial use in horses was evaluated in study D, the results of which are previously unpublished. Firstly, data collected with the prospective questionnaire from the practicing veterinarians showed that horses (n = 89) were frequently treated for skin or wound infections with penicillin G or trimethoprim-sulfadiazine. The mean duration of treatment was five to seven days. Secondly, according to retrospective data collected from patient records, horses (n = 74) that underwent colic surgery at the Veterinary Teaching Hospital of the University of Helsinki were generally treated according to national and hospital recommendations; penicillin G and gentamicin were administered preoperatively and treatment was continued for a median of three days postoperatively. In conclusion, Finnish veterinarians followed the national prudent use guidelines well. Narrow-spectrum antimicrobials were preferred and, for instance, fluoroquinolones were used sparingly. Prescription studies seemed to give good information on antimicrobial usage, especially when combined with complementary information from patient records. A prospective questionnaire study provided a fair amount of valuable data on several animal species. Electronic surveys are worth exploiting in the future.
Abstract:
The starting point of this study was to find out how historical consciousness manifests itself in the conceptions and experiences of Chilean refugees and their descendants. Previous research on historical consciousness has shown that powerful experiences, such as revolution and becoming a refugee, may have an effect on historical consciousness. The purpose of this study is to determine how those past experiences have influenced Chilean refugees' and their descendants' interpretations of the present and expectations for the future. The research material was collected by interviewing four Chilean refugees who escaped to Finland in the years 1973-1976 and four young adults who represent the second generation. All second-generation interviewees were born in Finland, and one or both of their parents were Chilean refugees. The two groups were not related to each other. The empirical part of the research was carried out using qualitative methods. The research material was collected by means of focused interviews and analysed with the qualitative data analysis software Atlas.ti 6.0. Content analysis was the main research tool. Previous theory of historical consciousness and the study questions were used to create the seven categories that manifest historical consciousness. The seven categories were biographical memory, collective memory, experiences of living between two cultures, the idea of man, the essence of history and the reason for living, value conceptions, and expectations of the future. Content analysis was based on those categories. Subcategories were based on the research material and were created during the analysis. The results of this study were made up of these categories. The study revealed that experiences of revolution and of being a refugee have a significant role in the historical consciousness of the Chilean refugees. This became evident in their biographical memory being divided into three parts, in their values, and in their belief in the possibility of an individual governing her own life. The second generation was also exposed to their parents' past experiences. The collective trauma in their parents' past has been an indirect part of their lives and has affected the way they think of themselves, their concepts and their place in the present world. The active and regular retrospection in Finland by Chilean adults and the special Gabriela Mistral club activities have played a large part in the construction of their historical consciousness.