Abstract:
Analyzing statistical dependencies is a fundamental problem in all empirical science. Dependencies help us understand causes and effects, create new scientific theories, and devise cures for problems. Nowadays, large amounts of data are available, but efficient computational tools for analyzing the data are missing. In this research, we develop efficient algorithms for a commonly occurring search problem: searching for the statistically most significant dependency rules in binary data. We consider dependency rules of the form X->A or X->not A, where X is a set of positive-valued attributes and A is a single attribute. Such rules describe which factors either increase or decrease the probability of the consequent A. A classic example is genetic and environmental factors, which can either cause or prevent a disease. The emphasis in this research is that the discovered dependencies should be genuine, i.e. they should also hold in future data. This is an important distinction from traditional association rules, which, in spite of their name and a similar appearance to dependency rules, do not necessarily represent statistical dependencies at all, or represent only spurious connections that occur by chance. Therefore, the principal objective is to search for rules with statistical significance measures. Another important objective is to search for only non-redundant rules, which express the real causes of the dependence without any incidental extra factors. The extra factors do not add any new information on the dependence, but can only blur it and make it less accurate in future data. The problem is computationally very demanding, because the number of all possible rules increases exponentially with the number of attributes. In addition, neither statistical dependency nor statistical significance is a monotonic property, which means that traditional pruning techniques do not work.
As a solution, we first derive the mathematical basis for pruning the search space with any well-behaving statistical significance measure. The mathematical theory is complemented by a new algorithmic invention, which enables an efficient search without any heuristic restrictions. The resulting algorithm can be used to search for both positive and negative dependencies with any commonly used statistical measure, such as Fisher's exact test, the chi-squared measure, mutual information, and z scores. According to our experiments, the algorithm scales well, especially with Fisher's exact test. It can easily handle even the densest data sets with 10,000-20,000 attributes. Still, the results are globally optimal, which is a remarkable improvement over existing solutions. In practice, this means that the user does not have to worry whether the dependencies hold in future data or whether the data still contains better, undiscovered dependencies.
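As an illustration of the kind of significance computation involved, the sketch below evaluates one candidate rule X -> A with a one-sided Fisher's exact test on the 2x2 contingency table of X and A. This is a minimal sketch written for this summary; the function name and interface are not from the thesis, and the actual algorithm additionally prunes the exponential rule space.

```python
from math import comb

def fisher_p(n, m_x, m_a, m_xa):
    """One-sided Fisher's exact test for the rule X -> A.

    n    : number of rows in the data
    m_x  : rows where the antecedent X holds
    m_a  : rows where the consequent A holds
    m_xa : rows where both X and A hold

    Returns the probability of observing at least m_xa co-occurrences
    under independence (the upper tail of the hypergeometric
    distribution); a small value indicates a significant positive
    dependency.
    """
    total = comb(n, m_x)
    p = 0.0
    for k in range(m_xa, min(m_x, m_a) + 1):
        p += comb(m_a, k) * comb(n - m_a, m_x - k) / total
    return p

# Example: in 100 rows, X holds in 30, A in 40, and both in 25 -
# far above the ~12 co-occurrences expected under independence,
# so the p-value is very small.
print(fisher_p(100, 30, 40, 25))
```

The negative rule X -> not A can be tested with the same function by replacing A with its complement (m_a becomes n - m_a, and m_xa becomes m_x - m_xa).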
Abstract:
For the past twenty years, several indicator sets have been produced at international, national and regional levels. Most of the work has concentrated on the selection of the indicators and on the collection of pertinent data, but less attention has been given to the actual users and their needs. This dissertation focuses on the use of sustainable development indicator sets. The dissertation explores the reasons that have deterred the use of the indicators, discusses the role of sustainable development indicators in a policy cycle, and broadens the view of use by recognising three different types of use. The work presents two indicator development processes: the Finnish national sustainable development indicators and the socio-cultural indicators supporting the measurement of eco-efficiency in the Kymenlaakso Region. The sets are compared using a framework created in this work to describe indicator process quality. It includes five principles supported by more specific criteria. The principles are high policy relevance, sound indicator quality, efficient participation, effective dissemination and long-term institutionalisation. The framework provided a way to identify the key obstacles to use. The two immediate problems with current indicator sets are that the users are unaware of them and that the indicators are often unsuitable for their needs. The reasons for these major flaws are the irrelevance of the indicators to policy needs, technical shortcomings in content and presentation, failure to engage the users in the development process, non-existent dissemination strategies, and a lack of institutionalisation to promote and update the indicators. The relative importance of the different obstacles varies among users and use types.
In addition to the indicator projects, the materials used in the dissertation include 38 interviews with high-level policy-makers or civil servants close to them, download statistics for the national indicator web pages, citations of the national indicator publication, and the media coverage of both indicator sets. According to the results, the most likely use of a sustainable development indicator set by policy-makers is to learn about the concept. Very little evidence of direct use to support decision-making was found. Conceptual use is also common for other user groups, namely the media, civil servants, researchers, students and teachers. Decision-makers themselves consider the most obvious use for the indicators to be the promotion of their own views, which is a form of legitimising use. The sustainable development indicators have different types of use in the policy cycle, and the most commonly expected type, instrumental use, is not very likely, or even desirable, at all stages. The stages of persuading the public and decision-makers about new problems, as well as of formulating new policies, employ legitimising use. Learning through conceptual use is also inherent to policy-making, as the people involved learn about the new situation. Instrumental use is most likely in policy formulation, implementation and evaluation. The dissertation is an article dissertation, comprising five papers published in scientific journals and an extensive introductory chapter that discusses and weaves together the papers.
Abstract:
In this thesis, the acceleration of energetic particles at collisionless shock waves in space plasmas is studied using numerical simulations, with an emphasis on physical conditions applicable to the solar corona. The thesis consists of four research articles and an introductory part that summarises the main findings reached in the articles and discusses them with respect to the theory of diffusive shock acceleration and to observations. The thesis gives a brief review of the observational properties of solar energetic particles and discusses a few open questions that are currently under active research. For example, in a few large gradual solar energetic particle events the heavy ion abundance ratios and average charge states show characteristics at high energies that are typically associated with flare-accelerated particles, i.e. with impulsive events. The role of flare-accelerated particles in these and other gradual events has been widely discussed in the scientific community, and it has been questioned whether and how the observed features can be explained in terms of diffusive shock acceleration at shock waves driven by coronal mass ejections. The most extreme solar energetic particle events are the so-called ground level enhancements, in which particles receive such high energies that they can penetrate all the way through Earth's atmosphere and increase radiation levels at the surface. It is not known what conditions are required for acceleration to GeV/nuc energies, and the presence of both very fast coronal mass ejections and X-class solar flares makes it difficult to determine the role of these two accelerators in ground level enhancements. The theory of diffusive shock acceleration is reviewed and its predictions are discussed with respect to the observed particle characteristics. We discuss how shock waves can be modeled and describe in detail the numerical model developed by the author.
The main part of this thesis consists of the four scientific articles that are based on results of the numerical shock acceleration model developed by the author. The novel feature of this model is that it can handle complex magnetic geometries which are found, for example, near active regions in the solar corona. We show that, according to our simulations, diffusive shock acceleration can explain the observed variations in abundance ratios and average charge states, provided that suitable seed particles and magnetic geometry are available for the acceleration process in the solar corona. We also derive an injection threshold for diffusive shock acceleration that agrees with our simulation results very well, and which is valid under weakly turbulent conditions. Finally, we show that diffusive shock acceleration can produce GeV/nuc energies under suitable coronal conditions, which include the presence of energetic seed particles, a favourable magnetic geometry, and an enhanced level of ambient turbulence.
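For orientation, the classical steady-state, test-particle prediction of diffusive shock acceleration (standard theory, not a result of this thesis) is a power-law momentum distribution downstream of the shock:

```latex
f(p) \propto p^{-q}, \qquad q = \frac{3r}{r-1},
```

where $r$ is the gas compression ratio of the shock; for a strong shock, $r \to 4$ and $q \to 4$, corresponding to $N(E) \propto E^{-2}$ for relativistic particles. The simulations discussed above probe how realistic coronal magnetic geometries, seed populations, and turbulence modify this idealised picture.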
Abstract:
Atmospheric aerosol particles have a strong impact on the global climate. A deep understanding of the physical and chemical processes affecting the atmospheric aerosol-climate system is crucial in order to describe those processes properly in global climate models. Besides their climatic effects, aerosol particles can degrade, for example, visibility and human health. Nucleation is a fundamental step in atmospheric new particle formation. However, the details of the atmospheric nucleation mechanisms have remained unresolved. The main reason for this has been the lack of instruments capable of measuring neutral newly formed particles in the size range below 3 nm in diameter. This thesis aims to extend the detectable particle size range towards the close-to-molecular sizes (~1 nm) of freshly nucleated clusters, and to obtain by direct measurement the concentrations of sub-3 nm particles in the atmosphere and under well-defined laboratory conditions. In the work presented in this thesis, new methods and instruments for sub-3 nm particle detection were developed and tested. The selected approach comprises four different condensation-based techniques and one electrical detection scheme. All of them are capable of detecting particles with diameters well below 3 nm, some even down to ~1 nm. The developed techniques and instruments were deployed in field measurements as well as in laboratory nucleation experiments. Ambient air studies showed that in a boreal forest environment a persistent population of 1-2 nm particles or clusters exists. The observation was made using four different instruments, demonstrating a consistent capability for the direct measurement of atmospheric nucleation. The results from the laboratory experiments showed that sulphuric acid is a key species in atmospheric nucleation. The mismatch between the earlier laboratory data and ambient observations concerning the dependency of the nucleation rate on the sulphuric acid concentration was explained.
The reason was shown to lie in the inefficient growth of the nucleated clusters and in the insufficient detection efficiency of the particle counters used in the previous experiments. Even though the exact molecular steps of nucleation remain an open question, the instrumental techniques developed in this work, as well as their application in laboratory and ambient studies, opened a new view into atmospheric nucleation and prepared the way for investigating nucleation processes with more suitable tools.
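The laboratory-ambient comparison discussed above is commonly framed in terms of a power-law parameterisation of the nucleation rate (the exponent values quoted here are typical literature figures, not taken from this thesis):

```latex
J = k\,[\mathrm{H_2SO_4}]^{\,n}
```

Ambient observations typically give $n \approx 1$-$2$, whereas earlier laboratory experiments had suggested a much steeper dependence; the growth- and detection-efficiency arguments above account for this discrepancy.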
Abstract:
The first year at university is critical in shaping the student's future academic development. Student integration has been shown to affect learning, motivation, persistence and, ultimately, graduation. Most importantly, however, integration affects how students' academic expertise develops. In this study a social-psychological assumption was made: one cannot grow into academic expertise in isolation, without interaction with teachers and peers. Integration happens via engagement. In this research, social and academic integration among Finnish freshmen was studied. How much did freshmen interact with their teachers and peers; how interested did they think their teachers were in students; how committed did they feel; and how did they assess their own academic development? In addition to integration, students were asked about their identification with the university and the frequency of actual contacts with teachers and peers. Lastly, students' personal epistemologies were studied to see if they were related to integration or to the frequency of contacts. The data were collected at the University of Helsinki in the autumn of 2001 and the spring of 2002 at three faculties: the Faculties of Social Sciences, Humanities and Science. In the autumn, 270 freshmen, and in the spring, 400 freshmen, completed the questionnaire. In addition to the cross-sectional data, a longitudinal data set was formed from 77 of the respondents. The results showed differences in how students were integrated. Freshmen at the Faculty of Science were the least integrated, whereas freshmen at the Faculty of Humanities were the most integrated. Identification with the university was positively related to integration. The frequency of contacts with faculty and peers was positively related to integration and identification. A more developed personal epistemology was also positively related to integration and to the frequency of contacts.
Differences were also found between the sexes in the frequency of peer interaction and in the level of epistemology. This study has both theoretical and practical implications. Positive correlations between integration, identification, frequency of contacts and personal epistemology were found. The guiding assumption of the significance of social interaction was thus supported. The practical relevance of the study concerns how teaching is carried out. In these data, over 50% of new university students at the end of their first year said they had never received feedback from an exam, never had a discussion with their teacher about a scientific topic, and never discussed with a teacher how their studies were going.
Abstract:
This doctoral dissertation in sociology examines how human heredity became a scientific, political and personal issue in 20th-century Finland. The study focuses on the institutionalisation of rationales and technologies concerning heredity in the context of Finnish medicine and health care. The analysis concentrates specifically on the introduction and development of prenatal screening within maternity care. The data comprise medical articles, policy documents and committee reports, as well as popular guidebooks and health magazines. The study commences with an analysis of the early 20th-century discussions on racial hygiene. It ends with an analysis of the choices given to pregnant mothers and families at present. Freedom to choose, considered by geneticists and many others as a guarantee of the ethicality of medical applications, is presented in this study as a historically, politically and scientifically constructed issue. New medical testing methods have generated new possibilities for governing life itself. However, they have also created new ethical problems. Drawing on recent historical data, the study illustrates how medical risk rationales on heredity have been introduced by the medical profession into Finnish health care. It also depicts the medical profession's ambivalence between maintaining patient autonomy and utilizing, for example, prenatal testing according to health policy interests. Personalized risk is discussed as a result of the empirical analysis. It is shown that increasing risk awareness among the public, as well as offering choices, has had unintended consequences. According to doctors, present-day parents often want to control risks more than is considered justified or acceptable. People's hopes of anticipating the health and normality of their future children have exceeded the limits offered by medicine. The individualization of the government of heredity is closely linked to a process that is termed depoliticisation.
The concept refers to the disembedding of medical genetics from its social contexts. Prenatal screening is regarded as being based on individual choice facilitated by neutral medical knowledge. However, prenatal screening within maternity care also has its basis in health policy aims and economic calculations. The methodological basis of the study lies in Michel Foucault's writings on the history of thought, as well as in science and technology studies.
Abstract:
Higher education is faced with the challenge of strengthening students' competencies for the constantly evolving technology-mediated practices of knowledge work. The knowledge creation approach to learning (Paavola et al., 2004; Hakkarainen et al., 2004) provides a theoretical tool for addressing learning and teaching organized around complex problems and the development of shared knowledge objects, such as reports, products, and new practices. As in professional work practices, it appears necessary to design sufficient open-endedness and complexity into students' teamwork in order to generate unpredictable and both practically and epistemologically challenging situations. The studies of the thesis examine what kinds of practices are observed when student teams engage in knowledge-creating inquiry processes, how the students themselves perceive the process, and how to facilitate inquiry with technology mediation, tutoring, and pedagogical models. Overall, the collaboration processes and productions of 20 student teams were investigated in detail. This collaboration took place in teams or small groups of 3-6 students from multiple domain backgrounds. Two pedagogical models were employed to provide heuristic guidance for the inquiry processes: the progressive inquiry model and the distributed project model. Design-based research methodology was employed in combination with case study as the research design. Database materials from the courses' virtual learning environment constituted the main body of data, with additional data from students' self-reflections and from student and teacher interviews. Study I examined the role of technology mediation and tutoring in directing students' knowledge production in a progressive inquiry process. The research investigated how the scale of scaffolding related to the nature of the knowledge produced and to the deepening of the question-explanation process.
In Study II, the metaskills of knowledge-creating inquiry were explored as a challenge for higher education: metaskills refers to the individual, collective, and object-centered aspects of monitoring collaborative inquiry. Study III examined the design of two courses and how the elaboration of shared objects unfolded on the basis of the two pedagogical models. Study IV examined how a concept-development project arranged for external customers promoted practices of distributed, partially virtual project work, and how the students coped with the knowledge creation challenge. Overall, important indicators of knowledge-creating inquiry were the following: new versions of knowledge objects and artifacts demonstrated a deepening inquiry process, and the various productions were co-created through iterations of negotiation, drafting, and versioning by the team members. Students faced the challenges of establishing collective commitment, devising practices to co-author and advance their reports, dealing with confusion, and managing culturally diverse teams. The progressive inquiry model, together with tutoring and technology, facilitated asking questions, generating explanations, and refocusing lines of inquiry. The involvement of the customers was observed to provide strong motivation for the teams. On this evidence, providing team-specific guidance, exposing students to models of scientific argumentation and expert work practices, and furnishing templates for the intended products appear to be fruitful ways to enhance inquiry processes. At the institutional level, educators would do well to explore ways of developing collaboration with external customers, public organizations or companies, and between educational units in order to enhance educational practices of knowledge-creating inquiry.
Abstract:
The study examines the personnel training and research activities carried out by the Organization and Methods Division of the Ministry of Finance and how they became part and parcel of the state administration in 1943-1971. The study is a combination of institutional and ideological historical research in the recent history of adult education, using a constructionist approach. Material salient to the study comes from the files of the Organization and Methods Division in the National Archives, parliamentary documents, committee reports, and magazines. The concentrated training and research activities arranged by the Organization and Methods Division became part and parcel of the state administration in the midst of controversial challenges and opportunities. They served to solve the social problems which beset the state administration, the contextual challenges besetting rationalization measures, and organizational challenges. The activities were also affected by a dependence on decision-makers, administrative units, and civil servants' organizations, by different views on rationalization and on the holistic nature of reforms, and by the formal theories that served as resources. The Division chose long-term projects which extended into the turf of the political decision-makers and administrative units, and which were intended to reform the structures of the state administration and to rationalize the practices of the administrative units. The crucial questions emerged as pairs of opposites (a constitutional state vs. the ideology of an administratively governed state, a system of national boards vs. a system of government through ministries, efficiency of work vs. pleasantness of work, centralized vs. decentralized rationalization activities) which were not solvable problems but impossible questions with no ultimate answers.
The aim of the rationalization of the state administration (the reform of the central, provincial, and local governments) was to facilitate integrated management and to enable a greater amount of work to be done by approaching management procedures scientifically and by clarifying administrative instances and their responsibilities with regard to each other. The means resorted to were organizational studies and committee work. In the rationalization of office work and finance control, the idea was to effect savings in administrative costs and to pare down those costs, as well as to rationalize and enhance those functions by developing the institution of work study practitioners in order to coordinate employer and employee relationships and benefits (the training of work study practitioners, work study, and a two-tier work study practitioner organization). A major part of the training meant teaching and implementing leadership skills in practice, which, in turn, meant that the learning environment was the genuine work community and efforts to change it. In office rationalization, the solution for regulating the relations between the employer and the employees was the co-existence of technical and biological rationalization, human resource administration, and the accounting and planning systems at the turn of the 1960s and 1970s. The former were based on the schools of scientific management and human relations, the latter on systems thinking, which was a combination of the former two. In the rationalization of the state administration, efforts were made to find solutions to stabilize management ideologies and to arrange the relationships of administrative systems within administrative science - among other things, in the Hoover Committee and the Simon decision-making theory, and, in the 1960s, in systems thinking. Despite the development-related vocabulary, the practical work was advanced rationalization.
It was said that the practical activities of both the state administration and the administrative units depended on professional managers who saw to production results and human relations. The pedagogical experts hired to develop training came up with a training system based on the training-technological model, in which training was made a function of its own. The State Training Center was established, and the training office of the Organization and Methods Division became the leader and coordinator of personnel training.
Abstract:
Abstract (Teaching in research ethics): The aim of this paper is to discuss the teaching of research ethics. According to the guidelines issued by the National Advisory Board on Research Ethics in Finland (2002), the units providing researcher training have a duty to include good scientific practice and research ethics in this training. Various kinds of materials are needed in the teaching of research ethics. One of them is fiction, which has proved helpful in discussions of ethical problems. A number of examples taken from Finnish and Swedish fiction are discussed with reference to the above-mentioned guidelines. The presentation is based on a chiasm, i.e. it proceeds from good scientific practice to fiction, and further from fiction to the teaching of research ethics.
Abstract:
The purpose of this study was to deepen the understanding of market segmentation theory by studying the evolution of the concept and by identifying the antecedents and consequences of the theory. The research method was influenced by content analysis and meta-analysis. The evolution of market segmentation theory was studied as a reflection of the evolution of marketing theory. According to this study, the theory of market segmentation has its roots in microeconomics, and it has been influenced by different disciplines, such as motivation research and buyer behaviour theory. Furthermore, this study suggests that the evolution of market segmentation theory can be divided into four major eras: the era of foundations, the era of development and blossoming, the era of stillness and stagnation, and the era of re-emergence. Market segmentation theory emerged in the mid-1950s and flourished between the mid-1950s and the late 1970s. During the 1980s the scientific community lost interest in the theory and no significant contributions were made. Now, towards the dawn of the new millennium, new approaches have emerged and market segmentation has gained new attention.
Abstract:
A method used for screening drugs of abuse must be sensitive, selective, simple, fast and reproducible. The aim of this work was to develop a simple yet sensitive sample-pretreatment method for the qualitative screening of benzodiazepines and amphetamine derivatives in urine using a micropillar array electrospray ionization chip (μPESI), which would offer an alternative to the immunological methods used in screening, whose sensitivity and selectivity are inadequate. A further aim was to examine the performance of the micropillar array electrospray chip in the analysis of biological samples. The pretreatment was optimised separately for benzodiazepines and for amphetamine derivatives. The pretreatment methods tested were liquid-liquid extraction, solid-phase extraction with an Oasis HLB cartridge and with a ZipTip® pipette tip, and dilution and filtration without extraction. Based on the measurements, the work focused on optimising the ZipTip® extraction. In the optimisation, the analytes were spiked into blank urine at their predetermined cut-off concentrations: benzodiazepines at 200 ng/ml and amphetamine derivatives at 300 ng/ml. For the benzodiazepines, each extraction step was optimised; as a result, the sample pH was adjusted to 5, and the phase was conditioned with acetonitrile, equilibrated and washed with a mixture of water (pH 5) and acetonitrile (10% v/v), and eluted with a mixture of acetonitrile, formic acid and water (95:1:4 v/v/v). For the amphetamine derivatives, the pH values of the sample and solvents were optimised; as a result, the sample pH was adjusted to 10, and the phase was conditioned with a mixture of water and ammonium hydrogen carbonate (pH 10, 1:1 v/v), equilibrated and washed with a mixture of acetonitrile and water (1:5 v/v), and eluted with methanol. The optimised extractions were tested with authentic urine samples provided by Yhtyneet Medix Laboratoriot, and the results were compared with those of quantitative GC/MS analysis. The benzodiazepine samples were hydrolysed before extraction to improve sensitivity. 
The authentic samples were analysed with a Q-TOF instrument in Viikki. In addition, the hydrolysed benzodiazepine samples were measured with the TOF instrument of Yhtyneet Medix Laboratoriot. Based on the results, the developed method requires further optimisation in order to work reliably. A particular problem was the scatter of results observed in replicate measurements; the manual sample introduction should be made more reproducible. In the analysis of the authentic benzodiazepine samples the main problem was false negative results, and in the analysis of the amphetamine derivatives false positive results. The false negatives are explained by the method's lack of sensitivity, and the false positives by contamination of the instrument, the chips or the solvents.
Abstract:
The National Curriculum Guidelines on Early Childhood Education and Care (ECEC) in Finland state that ECEC is developed holistically by observing children's and the educator community's activities and the ECEC environment. The background of this research was that assessment should be based on commonly agreed principles, which are recorded, for example, in the unit-specific ECEC curriculum. The objective of this research was to investigate how unit-specific ECEC curricula describe the physical indoor environment in day-care centres. According to the National Curriculum Guidelines on ECEC, there are four ways of acting that are characteristic of children: playing, physical activities, exploration, and artistic experiences and self-expression. The descriptions of the physical environment in unit-specific curricula were examined through these four ways of acting. In addition, the descriptions of the four ways of acting were compared with each other in order to find out the main differences and similarities in relation to the physical ECEC environment. The research material consisted of unit-specific ECEC curricula from 18 day-care centres in Helsinki. The focus of the research was the descriptions of the physical indoor environment in the curricula. The method used was theory-guided content analysis. The analyses were mainly qualitative. The descriptions of the physical environment varied widely both quantitatively and in substance. All curricula contained mentions of playing and of artistic experiences and self-expression, but mentions of physical activities and exploration were noticeably fewer. All four ways of acting were mentioned in the research material in relation to premises and instruments. Principles related to the use of premises and instruments, as well as other more general principles, were also mentioned in relation to all ways of acting. 
In contrast, children were not mentioned even once as upholders or innovators of the physical-activity environment, and children were mentioned only once in connection with the exploration environment. All ways of acting included statements to the effect that the environment must provide possibilities for the particular way of acting and that both materials and instruments must be available to children. However, the research material did not include any principle or statement concerning the physical environment that occurred in every unit-specific curriculum.