13 results for automated knowledge visualization

in Helda - Digital Repository of University of Helsinki


Relevance: 20.00%

Abstract:

The aim of this dissertation was to explore how different types of prior knowledge influence student achievement and how different assessment methods influence the observed effect of prior knowledge. The project started by creating a model of prior knowledge, which was tested in various science disciplines. Study I explored the contribution of different components of prior knowledge to student achievement in two different mathematics courses. The results showed that the procedural knowledge components, which require higher-order cognitive skills, predicted the final grades best and were also highly related to previous study success. The same pattern regarding the influence of prior knowledge was seen in Study III, a longitudinal study of the accumulation of prior knowledge in the context of pharmacy. The study analysed how prior knowledge from previous courses was related to student achievement in the target course. The results implied that students who possessed higher-level prior knowledge, that is, procedural knowledge, from previous courses also obtained higher grades in the more advanced target course. Study IV explored the impact of different types of prior knowledge on students' readiness to drop out of the course, on the pace of completing the course and on the final grade. The study was conducted in the context of chemistry. The results again revealed that students who performed well in the procedural prior-knowledge tasks were also likely to complete the course in the pre-scheduled time and to get higher final grades. On the other hand, students whose performance was weak in the procedural prior-knowledge tasks were more likely to drop out or take a longer time to complete the course. Study II explored the issue of prior knowledge from another perspective: it analysed the interrelations between academic self-beliefs, prior knowledge and student achievement in the context of mathematics. The results revealed that prior knowledge was more predictive of student achievement than the other variables included in the study. Self-beliefs were also strongly related to student achievement, but the predictive power of prior knowledge overrode the influence of self-beliefs when both were included in the same model. There was also a strong correlation between academic self-beliefs and prior-knowledge performance. The results of all four studies were consistent with each other, indicating that the model of prior knowledge may be used as a tool for prior knowledge assessment. It is useful to distinguish between different types of prior knowledge in assessment, since the type of prior knowledge students possess appears to make a difference. The results implied that there is indeed variation in students' prior knowledge and academic self-beliefs that influences student achievement, and this should be taken into account in instruction.
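The modelling step described above, in which prior knowledge and self-beliefs are entered into the same model, can be sketched as an ordinary regression comparison. The sketch below uses simulated, purely hypothetical data (not the dissertation's data) to show how a predictor correlated with prior knowledge can lose most of its weight in a joint model.

    # Hypothetical illustration only: simulated data, not the study's data.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 200
    prior = rng.normal(size=n)                        # prior-knowledge score (standardized)
    beliefs = 0.6 * prior + 0.8 * rng.normal(size=n)  # self-beliefs, correlated with prior knowledge
    grade = 0.7 * prior + 0.1 * beliefs + rng.normal(scale=0.5, size=n)

    for name, X in [("self-beliefs alone", beliefs[:, None]),
                    ("joint model", np.column_stack([prior, beliefs]))]:
        fit = sm.OLS(grade, sm.add_constant(X)).fit()
        print(name, fit.params.round(2), "R^2 = %.2f" % fit.rsquared)

In the single-predictor model, self-beliefs pick up the variance they share with prior knowledge; once prior knowledge enters the model, their coefficient shrinks, which mirrors the pattern reported in Study II.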

Relevance: 20.00%

Abstract:

This study examines supervisors' emerging new role in the technical customer service and home customers division of a large Finnish telecommunications corporation. The data come from a second-generation knowledge management project, an intervention study conducted for the supervisors of the division. The study exemplifies how supervision work is transforming in a high-technology organization characterized by rapid change in technologies, products, and grass-roots work practices. The intervention was conducted in the division during spring 2000. The primary data analyzed consist of six two-hour video-recorded intervention sessions. The unit of analysis was collective learning actions. The researcher first transcribed the conversations from the video-recorded meetings and then analyzed this qualitative data using an analytical schema based on collective learning actions. The supervisors' role is conceptualized as that of an actor in a collective and dynamic activity system, based on ideas from cultural-historical activity theory. On knowledge management, the researcher has taken a second-generation knowledge management viewpoint, following ideas from cultural-historical activity theory and developmental work research. Second-generation knowledge management considers knowledge to be embedded and constructed in collective practices, such as innovation networks or communities of practice (here, the supervisors' work community), which have the capacity to create new knowledge. The analysis and illustration of the supervisors' emerging new role is conceptualized in this framework using methodological ideas derived from activity theory and developmental work research. The major findings of the study show that supervisors' emerging new role in a high-technology telecommunications organization characterized by rapid, discontinuous change in technologies, products, and grass-roots practices cannot be defined or characterized using a normative management role or model. Their role is expanding in two dimensions: (1) socially, and (2) in new knowledge and work practices. The expansion in the organization and the inter-organizational network (social expansion) creates pressure to manage a network of co-operation partners and subordinates. On the other hand, the rapid change in technological solutions, new products, and novel customer wants (expansion in knowledge) creates pressure for supervisors to quickly innovate new work practices to manage this change. Keywords: activity theory, knowledge management, developmental work research, supervisors, high technology organizations, telecommunication organizations, second-generation knowledge management, competence laboratory, intervention research, learning actions.

Relevance: 20.00%

Abstract:

This thesis examines current cognitive science theories of concepts and their modelling with object-centered knowledge representation methods. The concept theories discussed are the classical definitional theory, prototype theory, dual theories, the neoclassical theory, the theory-theory and the atomistic theory. Object-centered methods have recently divided into two types of languages: object-based and class-based. New object-based object-oriented programming languages offer possibilities for representing concepts that earlier class-based languages, and also frame methods, lack. The thesis shows that the new features of object-based languages provide means for representing concepts in symbolic form better than traditional methods do. They can simulate everything that class-based languages can, but in addition they can simulate family-resemblance concepts and allow the dynamic modification of objects without violating the principle of psychological essentialism. The thesis also points out serious shortcomings that concern the object-centered approach as a whole. Keywords: concepts, concept theories, artificial intelligence, computational psychology, object-oriented programming, knowledge representation
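The contrast drawn above can be sketched in code. A class-based language fixes a concept's features in a class definition, whereas an object-based (prototype-style) representation clones an exemplar and overrides features per instance, accommodating family-resemblance concepts and dynamic modification. The following sketch is a hypothetical illustration of the idea, not the thesis's own formalism (Python is class-based, but it permits the prototype style emulated here).

    # Hypothetical sketch: prototype-style concept representation.
    # A concept is an exemplar object; instances are clones with per-object
    # feature overrides, so no single feature is necessary for membership
    # (family resemblance), and objects can be changed dynamically.
    import copy

    class Prototype:
        def __init__(self, **features):
            self.__dict__.update(features)

        def clone(self, **overrides):
            new = copy.deepcopy(self)
            new.__dict__.update(overrides)   # dynamic per-instance modification
            return new

    bird = Prototype(flies=True, lays_eggs=True, sings=True)
    penguin = bird.clone(flies=False)        # still bird-like despite lacking a feature
    canary = bird.clone()

    # Graded membership: similarity to the prototype instead of a definition.
    def similarity(obj, proto):
        shared = [k for k, v in proto.__dict__.items() if getattr(obj, k, None) == v]
        return len(shared) / len(proto.__dict__)

    print(similarity(penguin, bird))  # 0.66...
    print(similarity(canary, bird))   # 1.0

A class-based version would have to decide in advance which features define birds; the prototype version instead grades membership by overlap with the exemplar, which is the family-resemblance behaviour discussed in the thesis.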

Relevance: 20.00%

Abstract:

The present study addressed the epistemology of teachers' practical knowledge. Drawing from the literature, teachers' practical knowledge is defined as all of the teachers' cognitions (e.g., beliefs, values, motives, procedural knowing, and declarative knowledge) that guide their practice of teaching. The reasoning that lies behind teachers' practical knowledge is addressed to gain insight into its epistemic nature. I studied the practical knowledge of six class teachers who teach in the metropolitan region of Helsinki. The data were collected and analyzed relying on the assumptions of phenomenographic inquiry. The analysis proceeded in two stages, the first involving an abductive procedure and the second an inductive procedure for interpretation, through which the system of categories was developed. Finally, a quantitative analysis was nested into the qualitative findings to study the patterns of the teachers' reasoning. The results indicated that teachers justified their practical knowledge on the basis of morality and efficiency of action; efficiency of action presented itself in two different ways: authentic efficiency and naïve efficiency. The epistemic weight of morality was embedded in what I call "moral care". The core intention of teachers in moral care was the commitment they felt towards the "whole character" of students. From this perspective, the "dignity" and moral character of the students should not be exchanged for any "instrumental price". "Caring pedagogy" was the epistemic value of teachers' reasoning in authentic efficiency. The central idea in caring pedagogy was teachers' intention to improve the "intellectual properties" of "all or most" of the students using "flexible" and "diverse" pedagogies. "Regulating pedagogy", however, was the epistemic condition of practice in the cases corresponding to naïve efficiency. Teachers argued that effective practical knowledge should regulate and manage classroom activities, but the targets of the practical knowledge were mainly other "issues" or a certain percentage of the students. In these cases, the teachers' arguments were mainly based on the notion of "what worked", without reflection on "what did not work". Drawing from the theoretical background and the data, teachers' practical knowledge qualifies as "praxial knowledge" when teachers used the epistemic conditions of "caring pedagogy" and "moral care". It has a merely "practicable" epistemic status when teachers use the epistemic condition of regulating pedagogy. As such, praxial knowledge with the dimensions of caring pedagogy and moral care represents the "normative" perspective on teachers' practical knowledge, and thus reflects a higher epistemic status than "practicable" knowledge, which represents a "descriptive" perception of teachers' practical knowledge and teaching.

Relevance: 20.00%

Abstract:

The basic goal of a proteomic microchip is to achieve efficient and sensitive high-throughput protein analyses, automatically carrying out several measurements in parallel. A protein microchip could either detect a single protein or a large set of proteins for diagnostic purposes, basic proteome analysis or functional analysis. Such analyses include, e.g., interactomics, general protein expression studies, and the detection of structural alterations or secondary modifications. The results may be visualized by simple immunoreactions, general or specific labelling, or mass spectrometry. For this purpose we have manufactured chip-based proteome analysis devices that utilize classical polymer gel electrophoresis technology to run one- and two-dimensional gel electrophoresis separations of proteins, just at a smaller scale. In total, we manufactured three functional prototypes, of which one performed a miniaturized one-dimensional gel electrophoresis (1-DE) separation and the second and third performed two-dimensional gel electrophoresis (2-DE) separations. These microchips were successfully used to separate and characterize a set of predefined standard proteins as well as cell and tissue samples. In addition, the miniaturized 2-DE (ComPress-2DE) chip presents a novel way of combining the 1st- and 2nd-dimensional separations, thus avoiding manual handling of the gels, eliminating cross-contamination, and making analyses faster and more repeatable. All three showed the advantages of miniaturization over commercial devices, such as fast analysis, low sample and reagent consumption, high sensitivity, high repeatability and low cost. All of these instruments have the potential to be fully automated thanks to their easy-to-use set-up.

Relevance: 20.00%

Abstract:

The current study of Scandinavian multinational corporate subsidiaries in the rapidly growing Eastern European market attempts, in view of their particular organizational structure, to gain new insights into the processes and potential benefits of knowledge and technology transfer. This study explores how to succeed in knowledge transfer and become more competitive, driven by the need to improve the transfer of systematic knowledge for the manufacture of products and the provision of services in a newly entered market. The scope of the research is limited to multinational corporations, defined as enterprises comprising entities in two or more countries, regardless of the legal form and field of activity of those entities, which operate under a system of decision-making permitting coherent policies and a common strategy through one or more decision-making centres. The entities are linked by ownership and able to exercise influence over each other's activities; in particular, they share knowledge, resources, and responsibilities. The research question is: "How and to what extent can knowledge transfer influence a company's technological competence and economic competitiveness?" The study also seeks to find out what forces and factors affect the development of subsidiary competencies; what factors influence the corporate integration and use of the subsidiary's competencies; and what may increase the competitiveness of an MNC pursuing a leading position in the entered market. The empirical part of the research was based on qualitative analyses of twenty interviews conducted among employees of Scandinavian MNC subsidiary units situated in Ukraine, using a structured sequence of questions with open-ended answers. The data were investigated through comparative case analysis against the framework from the literature. Findings indicate that a technological competence developed in one subsidiary leads to an integration of that competence with other corporate units within the MNC. Success increasingly depends upon people's learning. The local economic area is crucial for understanding competition and industrial performance, as there seems to be a clear link between the performance of subsidiaries and the conditions prevailing in their environment. The linkage between competitive advantage and a company's success is mutually dependent. Observation suggests that companies can be characterized as clusters of complementary activities such as R&D, administration, marketing, manufacturing and distribution. The study identifies barriers and obstacles in technology and knowledge transfer that are relevant to the subsidiaries' competence development. Under specific circumstances, the accumulated experience can be implemented in a newly entered market with simple procedures and at low cost, by cloning. The main goal is to support company prosperity, increasing profits and sustaining an increased market share through improved product quality and/or reduced production costs in the subsidiaries via the cloning approach. Keywords: multinational corporation; technology transfer; knowledge transfer; subsidiary competence; barriers and obstacles; competitive advantage; Eastern European market

Relevance: 20.00%

Abstract:

Comprehensive two-dimensional gas chromatography (GC×GC) offers enhanced separation efficiency, reliability in qualitative and quantitative analysis, the capability to detect low quantities, and information on the whole sample and its components. These features are essential in the analysis of complex samples, in which the number of compounds may be large or the analytes of interest are present at trace level. This study involved the development of instrumentation, data analysis programs and methodologies for GC×GC and their application in studies on qualitative and quantitative aspects of GC×GC analysis. Environmental samples were used as model samples. Instrumental development comprised the construction of three versions of a semi-rotating cryogenic modulator, in which modulation was based on two-step cryogenic trapping with continuously flowing carbon dioxide as coolant. Two-step trapping was achieved by rotating the nozzle spraying the carbon dioxide with a motor. The fastest rotation and highest modulation frequency were achieved with a permanent magnetic motor, and modulation was most accurate when the motor was controlled with a microcontroller containing a quartz crystal. Heated wire resistors were unnecessary for the desorption step when liquid carbon dioxide was used as coolant. With the modulators developed in this study, the narrowest peaks were 75 ms at base. Three data analysis programs were developed, allowing basic, comparison and identification operations. The basic operations enabled the visualisation of two-dimensional plots and the determination of retention times, peak heights and volumes. The overlaying feature in the comparison program allowed easy comparison of 2D plots. An automated identification procedure based on mass spectra and retention parameters allowed the qualitative analysis of data obtained by GC×GC and time-of-flight mass spectrometry. In the methodological development, sample preparation (extraction and clean-up) and GC×GC methods were developed for the analysis of atmospheric aerosol and sediment samples. Dynamic sonication-assisted extraction was well suited for atmospheric aerosols collected on a filter. A clean-up procedure utilising normal-phase liquid chromatography with ultraviolet detection worked well in the removal of aliphatic hydrocarbons from a sediment extract. GC×GC with flame ionisation detection or quadrupole mass spectrometry provided good reliability in the qualitative analysis of target analytes; however, GC×GC with time-of-flight mass spectrometry was needed in the analysis of unknowns. The automated identification procedure that was developed was efficient in the analysis of large data files, but manual searching and analyst knowledge remain invaluable as well. Quantitative analysis was examined in terms of calibration procedures and the effect of matrix compounds on GC×GC separation. In addition to calibration in GC×GC with summed peak areas or peak volumes, a simplified area calibration based on the normal GC signal can be used to quantify compounds in samples analysed by GC×GC, so long as certain qualitative and quantitative prerequisites are met. In a study of the effect of matrix compounds on GC×GC separation, it was shown that the quality of the separation of PAHs is not significantly disturbed by the amount of matrix, and quantitativeness suffers only slightly in the presence of matrix and when the amount of target compounds is low. The benefits of GC×GC in the analysis of complex samples easily outweigh the minor drawbacks of the technique. The developed instrumentation and methodologies performed well for environmental samples, but they could also be applied to other complex samples.
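As an illustration of the basic operations mentioned above (determining retention times, peak heights and volumes from a two-dimensional plot), the raw GC×GC detector trace can be folded into a 2D matrix using the modulation period, after which these quantities fall out of simple array operations. The sketch below is not the program developed in the study; the sampling rate, modulation period and signal are hypothetical.

    # Hypothetical sketch of a basic GC x GC operation: folding the raw
    # detector trace into a 2D chromatogram and reading off the apex of
    # the largest peak. All numbers are illustrative, not from the study.
    import numpy as np

    rate_hz = 100          # detector sampling rate (assumed)
    period_s = 5.0         # modulation period (assumed)
    samples_per_mod = int(rate_hz * period_s)

    # Fake trace: baseline noise plus one narrow modulated peak.
    t = np.arange(60_000) / rate_hz
    signal = 0.01 * np.random.default_rng(1).normal(size=t.size)
    signal += 50 * np.exp(-0.5 * ((t - 312.3) / 0.04) ** 2)

    n_mod = signal.size // samples_per_mod
    chrom2d = signal[: n_mod * samples_per_mod].reshape(n_mod, samples_per_mod)

    i, j = np.unravel_index(np.argmax(chrom2d), chrom2d.shape)
    print("1st-D retention time: %.1f s" % (i * period_s))
    print("2nd-D retention time: %.2f s" % (j / rate_hz))
    print("peak height: %.1f" % chrom2d[i, j])
    # Peak volume ~ summed intensity over a window around the apex.
    vol = chrom2d[max(i - 1, 0): i + 2, max(j - 50, 0): j + 50].sum() / rate_hz
    print("approx. peak volume: %.1f" % vol)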

Relevance: 20.00%

Abstract:

In this thesis the use of the Bayesian approach to statistical inference in fisheries stock assessment is studied. The work was conducted in collaboration with the Finnish Game and Fisheries Research Institute, using the problem of monitoring and predicting the juvenile salmon population in the River Tornionjoki as an example application. The River Tornionjoki is the largest salmon river flowing into the Baltic Sea. This thesis tackles the issues of model formulation and model checking, as well as computational problems related to Bayesian modelling, in the context of fisheries stock assessment. Each article of the thesis provides a novel method, either for extracting information from data obtained via a particular type of sampling system or for integrating information about the fish stock from multiple sources in terms of a population dynamics model. Mark-recapture and removal sampling schemes and a random catch sampling method are covered for the estimation of population size. In addition, a method for estimating the stock composition of a salmon catch based on DNA samples is presented. For most of the articles, Markov chain Monte Carlo (MCMC) simulation was used as a tool to approximate the posterior distribution. Problems arising from the sampling method are also briefly discussed and potential solutions are proposed. Special emphasis in the discussion is given to the philosophical foundation of the Bayesian approach in the context of fisheries stock assessment. It is argued that the role of the subjective prior knowledge needed in practically all parts of a Bayesian model should be recognized and consequently fully utilised in the process of model formulation.
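For the simplest of the sampling schemes mentioned, two-sample mark-recapture, the Bayesian logic can be sketched with a grid approximation of the posterior for population size. The models in the thesis are considerably more elaborate (and mostly rely on MCMC); all numbers and the flat prior below are hypothetical choices for illustration.

    # Hypothetical two-sample mark-recapture model: n1 fish are marked,
    # n2 are caught later, m2 of them carry marks. The likelihood of m2
    # given population size N is hypergeometric; a discrete uniform prior
    # over N gives the posterior by grid approximation.
    import numpy as np
    from scipy.stats import hypergeom

    n1, n2, m2 = 200, 150, 12                   # illustrative catch numbers
    N_grid = np.arange(n1 + n2 - m2, 20_000)    # N cannot be below fish seen
    prior = np.ones_like(N_grid, dtype=float)   # flat prior (a modelling choice)
    like = hypergeom.pmf(m2, N_grid, n1, n2)
    post = prior * like
    post /= post.sum()

    mean_N = (N_grid * post).sum()
    cdf = np.cumsum(post)
    lo, hi = N_grid[np.searchsorted(cdf, [0.05, 0.95])]
    print("posterior mean N = %.0f, 90%% interval [%d, %d]" % (mean_N, lo, hi))

The same structure, a sampling model for the observation process combined with an explicit prior, underlies the more realistic removal-sampling and population dynamics models described in the articles.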

Relevance: 20.00%

Abstract:

This thesis presents methods for locating and analyzing cis-regulatory DNA elements involved in the regulation of gene expression in multicellular organisms. The regulation of gene expression is carried out by the combined effort of several transcription factor proteins collectively binding the DNA at the cis-regulatory elements. Only sparse knowledge of the 'genetic code' of these elements exists today. An automatic tool for the discovery of putative cis-regulatory elements could assist their experimental analysis, which would result in a more detailed view of cis-regulatory element structure and function. We have developed a computational model for the evolutionary conservation of cis-regulatory elements. The elements are modeled as evolutionarily conserved clusters of sequence-specific transcription factor binding sites. We give an efficient dynamic programming algorithm that locates the putative cis-regulatory elements and scores them according to the conservation model. A notable proportion of the high-scoring DNA sequences show transcriptional enhancer activity in transgenic mouse embryos. The conservation model includes four parameters whose optimal values are estimated with simulated annealing. With good parameter values the model discriminates well between DNA sequences with evolutionarily conserved cis-regulatory elements and DNA sequences that have evolved neutrally. On further inquiry, the set of highest-scoring putative cis-regulatory elements was found to be sensitive to small variations in the parameter values. The statistical significance of the putative cis-regulatory elements is estimated with the Two Component Extreme Value Distribution. The p-values grade the conservation of the cis-regulatory elements above the neutral expectation. The parameters of the distribution are estimated by simulating neutral DNA evolution. The conservation of the transcription factor binding sites can be used in the upstream analysis of regulatory interactions. This approach may provide mechanistic insight into transcription-level data from, e.g., microarray experiments. Here we give a method to predict shared transcriptional regulators for a set of co-expressed genes. The EEL (Enhancer Element Locator) software implements the method for locating putative cis-regulatory elements. The software facilitates both interactive use and distributed batch processing. We have used it to analyze the non-coding regions around all human genes with respect to the orthologous regions in various other species, including mouse. The data from these genome-wide analyses are stored in a relational database, which is used in the publicly available web services for upstream analysis and visualization of the putative cis-regulatory elements in the human genome.
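A greatly simplified sketch of the two ingredients named above: scoring candidate transcription factor binding sites with a position weight matrix, and chaining nearby sites into a maximal-scoring cluster by dynamic programming. This illustrates the general technique only, not the EEL algorithm itself, which additionally aligns orthologous sequences and scores evolutionary conservation; the matrix, threshold and gap penalty below are hypothetical.

    # Hypothetical sketch: PWM scanning plus dynamic-programming chaining
    # of nearby binding sites into a best-scoring cluster. The real EEL
    # method also models cross-species conservation; that part is omitted.

    # Toy position weight matrix (log-odds style); values are illustrative.
    PWM = [{'A': 1.2, 'C': -1.0, 'G': -1.0, 'T': -1.0},
           {'A': -1.0, 'C': 1.2, 'G': -1.0, 'T': -1.0},
           {'A': -1.0, 'C': -1.0, 'G': 1.2, 'T': -1.0}]

    def scan(seq, pwm, threshold=2.0):
        """Return (position, score) for windows scoring above threshold."""
        w = len(pwm)
        hits = []
        for i in range(len(seq) - w + 1):
            s = sum(col[base] for col, base in zip(pwm, seq[i:i + w]))
            if s >= threshold:
                hits.append((i, s))
        return hits

    def best_cluster(hits, gap_penalty=0.1):
        """DP over hits in positional order: chain sites, paying a penalty
        proportional to the gap between consecutive sites."""
        best = [s for _, s in hits]      # best chain score ending at each hit
        for j in range(len(hits)):
            for i in range(j):
                gap = hits[j][0] - hits[i][0]
                cand = best[i] + hits[j][1] - gap_penalty * gap
                if cand > best[j]:
                    best[j] = cand
        return max(best, default=0.0)

    seq = "TTACGTTACGGGACGTT"
    hits = scan(seq, PWM)
    print(hits, "cluster score: %.2f" % best_cluster(hits))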