14 results for Multiresolution Visualization
in Helda - Digital Repository of University of Helsinki
Abstract:
Information visualization is a process of constructing a visual presentation of abstract quantitative data. The characteristics of visual perception enable humans to recognize, with little effort, patterns, trends and anomalies inherent in the data when it is shown in a visual display. Such properties of the data are likely to be missed in a purely text-based presentation. Visualizations are therefore widely used in contemporary business decision support systems. Visual user interfaces called dashboards are tools for reporting the status of a company and its business environment to facilitate business intelligence (BI) and performance management activities. In this study, we examine the research on the principles of human visual perception and information visualization as well as the application of visualization in a business decision support system. A review of current BI software products reveals that the visualizations included in them are often quite ineffective in communicating important information. Based on the principles of visual perception and information visualization, we summarize a set of design guidelines for creating effective visual reporting interfaces.
Abstract:
The basic goal of a proteomic microchip is to achieve efficient and sensitive high-throughput protein analyses, automatically carrying out several measurements in parallel. A protein microchip would detect either a single protein or a large set of proteins for diagnostic purposes, basic proteome analysis or functional analysis. Such analyses would include, e.g., interactomics, general protein expression studies, and the detection of structural alterations or secondary modifications. Visualization of the results may be achieved by simple immunoreactions, general or specific labelling, or mass spectrometry. For this purpose we have manufactured chip-based proteome analysis devices that use classical polymer gel electrophoresis technology to run one- and two-dimensional gel electrophoresis separations of proteins at a much smaller scale. In total, we manufactured three functional prototypes, of which one performed a miniaturized one-dimensional gel electrophoresis (1-DE) separation, and the second and third performed two-dimensional gel electrophoresis (2-DE) separations. These microchips were successfully used to separate and characterize a set of predefined standard proteins as well as cell and tissue samples. In addition, the miniaturized 2-DE (ComPress-2DE) chip presents a novel way of combining the first- and second-dimension separations, thus avoiding manual handling of the gels, eliminating cross-contamination, and making the analyses faster and more repeatable. All three devices showed the advantages of miniaturization over commercial devices, such as fast analysis, low sample and reagent consumption, high sensitivity, high repeatability and low cost of operation. All these instruments have the potential to be fully automated due to their easy-to-use set-up.
Abstract:
Remote sensing provides methods to infer land cover information over large geographical areas at a variety of spatial and temporal resolutions. Land cover is input data for a range of environmental models, and information on land cover dynamics is required for monitoring the implications of global change. Such data are also essential in support of environmental management and policymaking. Boreal forests are a key component of the global climate and a major sink of carbon. The northern latitudes are expected to experience a disproportionate and rapid warming, which can have a major impact on vegetation at forest limits. This thesis examines the use of optical remote sensing for estimating aboveground biomass, leaf area index (LAI), tree cover and tree height in the boreal forests and the tundra-taiga transition zone in Finland. Continuous fields of forest attributes are required, for example, to improve the mapping of forest extent. The thesis focuses on studying the feasibility of satellite data at multiple spatial resolutions, assessing the potential of multispectral, multiangular and multitemporal information, and provides a regional evaluation of global land cover data. Preprocessed ASTER, MISR and MODIS products are the principal satellite data. The reference data consist of field measurements, forest inventory data and fine resolution land cover maps. Fine resolution studies demonstrate that statistical relationships between biomass and satellite data are relatively strong in single-species, low-biomass mountain birch forests in comparison with higher-biomass coniferous stands. The combination of forest stand data and fine resolution ASTER images provides a method for biomass estimation using medium resolution MODIS data. Multiangular data improve the accuracy of land cover mapping in the sparsely forested tundra-taiga transition zone, particularly in mires. Similarly, multitemporal data improve the accuracy of coarse resolution tree cover estimates in comparison to single-date data. Furthermore, the peak of the growing season is not necessarily the optimal time for land cover mapping in the northern boreal regions. The evaluated coarse resolution land cover data sets have considerable shortcomings in northernmost Finland and should be used with caution in similar regions. Quantitative reference data and upscaling methods for integrating multiresolution data are required for the calibration of statistical models and the evaluation of land cover data sets. The preprocessed image products have potential for wider use, as they can considerably reduce the time and effort spent on data processing.
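To illustrate the upscaling idea described above, the following sketch is purely illustrative: the synthetic data, the block size and the simple linear model are assumptions, not the methodology of the thesis. It shows how a fine-resolution biomass layer calibrated against field data could be aggregated to coarser pixels and then related to medium-resolution reflectance.

```python
import numpy as np

# Illustrative only: synthetic "fine-resolution" biomass (step 1), aggregated to
# coarse pixels and related to medium-resolution reflectance (step 2).
rng = np.random.default_rng(0)

# Step 1: fine-resolution biomass predicted from a spectral index (hypothetical)
fine_index = rng.uniform(0.2, 0.8, size=(600, 600))           # per fine pixel
fine_biomass = 120.0 * fine_index + rng.normal(0, 5, fine_index.shape)

# Step 2: aggregate fine-resolution biomass to coarse pixels (e.g. a 20 x 20 block)
block = 20
coarse_biomass = fine_biomass.reshape(30, block, 30, block).mean(axis=(1, 3))

# Coarse-resolution reflectance to be related to the aggregated biomass
coarse_reflectance = 0.9 - 0.004 * coarse_biomass + rng.normal(0, 0.01, coarse_biomass.shape)

# Fit a simple linear model, biomass ~ coarse reflectance, by ordinary least squares
X = np.column_stack([np.ones(coarse_reflectance.size), coarse_reflectance.ravel()])
coef, *_ = np.linalg.lstsq(X, coarse_biomass.ravel(), rcond=None)
pred = X @ coef
rmse = np.sqrt(np.mean((pred - coarse_biomass.ravel()) ** 2))
print(f"intercept={coef[0]:.1f}, slope={coef[1]:.1f}, RMSE={rmse:.1f}")
```

In practice the fine-resolution calibration, the aggregation scheme and the choice of predictors would follow the thesis's own data sources (field plots, forest inventory data, ASTER, MODIS); the block averaging here only stands in for that upscaling step.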
Abstract:
This thesis presents methods for locating and analyzing cis-regulatory DNA elements involved in the regulation of gene expression in multicellular organisms. The regulation of gene expression is carried out by the combined effort of several transcription factor proteins collectively binding the DNA at the cis-regulatory elements. Only sparse knowledge of the 'genetic code' of these elements exists today. An automatic tool for the discovery of putative cis-regulatory elements could help their experimental analysis, which would result in a more detailed view of cis-regulatory element structure and function. We have developed a computational model for the evolutionary conservation of cis-regulatory elements. The elements are modeled as evolutionarily conserved clusters of sequence-specific transcription factor binding sites. We give an efficient dynamic programming algorithm that locates the putative cis-regulatory elements and scores them according to the conservation model. A notable proportion of the high-scoring DNA sequences show transcriptional enhancer activity in transgenic mouse embryos. The conservation model includes four parameters whose optimal values are estimated with simulated annealing. With good parameter values the model discriminates well between DNA sequences with evolutionarily conserved cis-regulatory elements and DNA sequences that have evolved neutrally. On further inquiry, the set of highest-scoring putative cis-regulatory elements was found to be sensitive to small variations in the parameter values. The statistical significance of the putative cis-regulatory elements is estimated with the Two Component Extreme Value Distribution. The p-values grade the conservation of the cis-regulatory elements above the neutral expectation. The parameter values for the distribution are estimated by simulating neutral DNA evolution. The conservation of the transcription factor binding sites can be used in the upstream analysis of regulatory interactions. This approach may provide mechanistic insight into transcription-level data from, e.g., microarray experiments. Here we give a method to predict shared transcriptional regulators for a set of co-expressed genes. The EEL (Enhancer Element Locator) software implements the method for locating putative cis-regulatory elements. The software facilitates both interactive use and distributed batch processing. We have used it to analyze the non-coding regions around all human genes with respect to the orthologous regions in various other species, including mouse. The data from these genome-wide analyses are stored in a relational database, which is used in the publicly available web services for upstream analysis and visualization of the putative cis-regulatory elements in the human genome.
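As a rough illustration of the site-clustering idea described above, the sketch below is a greatly simplified stand-in for the EEL conservation scoring: the linear gap penalty, the per-site scores and the example hits are all hypothetical. It only shows how a dynamic program can chain conserved binding-site hits along a sequence and report the maximum-scoring cluster.

```python
# Simplified sketch of chaining conserved binding-site hits into a cluster.
# This is NOT the EEL scoring model; the recurrence illustrated is
#   best[i] = score[i] + max(0, max_{j<i} (best[j] - gap_cost * (pos[i] - pos[j]))).

def best_cluster(sites, gap_cost=0.01):
    """sites: list of (position, score) tuples sorted by position."""
    best = [0.0] * len(sites)
    prev = [None] * len(sites)
    for i, (pos_i, score_i) in enumerate(sites):
        best[i] = score_i                      # option: start a new cluster here
        for j in range(i):
            pos_j, _ = sites[j]
            chained = best[j] - gap_cost * (pos_i - pos_j) + score_i
            if chained > best[i]:              # option: extend an earlier cluster
                best[i], prev[i] = chained, j
    i_best = max(range(len(sites)), key=lambda k: best[k])
    chain, i = [], i_best                      # trace back the best-scoring chain
    while i is not None:
        chain.append(sites[i])
        i = prev[i]
    return best[i_best], list(reversed(chain))

# Hypothetical conserved binding-site hits: (position, conservation score)
hits = [(120, 2.1), (180, 1.4), (900, 0.8), (960, 2.6), (1010, 1.9)]
print(best_cluster(hits))
```

The actual model scores clusters of sites conserved between orthologous sequences under a four-parameter evolutionary model, which this toy recurrence does not attempt to reproduce.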
Abstract:
Eukaryotic cells are characterized by having a subset of internal membrane compartments, each one with a specific identity, structure and function. Proteins destined to be targeted to the exterior of the cell need to enter and progress through the secretory pathway. Transport of secretory proteins from the endoplasmic reticulum (ER) to the Golgi takes place by the selective packaging of proteins into COPII-coated vesicles at the ER membrane. Taking advantage of the extensive genetic tools available for S. cerevisiae, we found that Hsp150, a yeast secretory glycoprotein, selectively exited the ER in the absence of any of the three Sec24p family members. Sec24p has been thought to be an essential component of the COPII coat and thus indispensable for exocytic membrane traffic. Next we analyzed the ability of Hsp150 to be secreted in mutants in which post-Golgi transport is temperature sensitive. We found that Hsp150 could be selectively secreted under conditions where the exocyst component Sec15p is defective. Analysis of the secretory vesicles revealed that Hsp150 was packaged into a subset of known secretory vesicles as well as into a novel pool of secretory vesicles at the level of the Golgi. Secretion of Hsp150 in the absence of Sec15p function was dependent on Mso1p, a protein capable of interacting with vesicles intended to fuse with the plasma membrane, with the SNARE machinery and with Sec1p. This work demonstrated that Hsp150 is capable of using alternative secretory pathways in ER-to-Golgi and Golgi-to-plasma membrane traffic. The sorting signals used for the secretion of Hsp150 at the two stages of the secretory pathway were different, revealing the highly dynamic nature and spatial organization of the secretory pathway. Foreign proteins usually misfold in the yeast ER. We used Hsp150 as a carrier to assist the folding and transport of heterologous proteins through the secretory pathway to the culture medium in both S. cerevisiae and P. pastoris. Using this technique we expressed Hsp150Δ-HRP and developed a staining procedure that allowed the visualization of the organelles of the secretory pathway of S. cerevisiae.
Abstract:
The overall aim of this dissertation was to study the public's preferences for forest regeneration fellings and field afforestations, as well as to examine how these preferences relate to landscape management instructions, to ecological healthiness, and to contemporary theories for predicting landscape preferences. The dissertation includes four case studies in Finland, each based on the visualization of management options and on surveys. Guidelines for improving the visual quality of forest regeneration and field afforestation are given based on the case studies. The results show that forest regeneration can be connected to positive images and memories when the regeneration area is small and some time has passed since the felling. Preferences may depend not only on the management alternative itself but also on the viewing distance, the viewing point, and the scene in which the management options are implemented. The current Finnish forest landscape management guidelines, as well as the ecological healthiness of the studied options, are to a large extent compatible with the public's preferences. However, there are some discrepancies. For example, the landscape management instructions as well as ecological hypotheses suggest that retention trees should be left in groups, whereas people usually prefer individually placed retention trees to trees left in groups. Information and psycho-evolutionary theories provide some possible explanations for people's preferences for forest regeneration and field afforestation, but the results cannot be consistently explained by these theories. The preferences of the different stakeholder groups were very similar. However, the preference ratings of the groups that make their living from the forest (forest owners and forest professionals) differed slightly from those of the others. These results support the assumption that preferences are largely consistent at least within one nation, but that knowledge and a reference group may also influence preferences.
Abstract:
The goal of the surgical reconstruction of the middle ear and the ossicular chain, often required in middle ear surgery, is to create conditions that allow good hearing and keep the middle ear free of infection and aerated. The implant materials traditionally used in middle ear reconstruction have been the patient's own tissues and, when necessary, various non-biodegradable biomaterials such as titanium and silicone. A problem in the use of biomaterials can be bacterial adherence, i.e. the attachment of bacteria to the surface of the foreign material, which may lead to biofilm formation. This can cause a chronic tissue infection that responds poorly to antibiotics and, in practice, often leads to reoperation and removal of the implant. Biodegradable polymers based on lactic acid and glycolic acid have been in clinical use for decades. They have been used especially as support materials, for example in orthopaedics and in maxillofacial surgery. So far they have not been used in middle ear surgery. Imaging of the ear relies primarily on computed tomography (CT). A problem with CT examinations is the patient's exposure to a relatively high radiation dose, which accumulates if the imaging has to be repeated. This dissertation investigates the suitability of limited cone beam CT, previously used in routine clinical work mainly for imaging the teeth and the facial region, for imaging the ear. The first two studies of the dissertation investigated and compared the in vitro adherence of two bacteria that cause chronic and postoperative ear infections, Staphylococcus aureus and Pseudomonas aeruginosa, to the surfaces of titanium, silicone and two different biodegradable polymers (PLGA). In addition, the effect of coating the materials with albumin on adherence was studied. The third study examined, in an animal model, the biocompatibility of PLGA in experimental middle ear surgery. PLGA material was implanted in the middle ears of chinchillas; the animals were followed up and euthanized six months after the operation. The assessment of biocompatibility was based on clinical observations and tissue samples. The fourth study examined the suitability of cone beam CT for imaging the ear by comparing its accuracy with that of conventional spiral CT. Temporal bones were imaged with both devices to assess how accurately the clinically and surgically important structures of the ear were depicted. The fifth study also evaluated the imaging of operated temporal bones with cone beam CT. In the bacterial studies, on average at most as many, or fewer, bacteria adhered to the surface of the PLGA material as to silicone or titanium. Albumin coating significantly reduced bacterial adherence on all materials. On the basis of the animal experiments, PLGA was found to be well tolerated in the middle ear. No infections, tympanic membrane perforations or extrusion of the material were observed in the ear canals or middle ears. The tissue samples showed a mild inflammatory reaction and fibrosis around the implant. In the temporal bone studies, limited cone beam CT was found to be at least as accurate a method as spiral CT in imaging the structures of the middle and inner ear, and the radiation dose from a single examination was considerably lower than that of spiral CT. Cone beam CT was well suited to imaging middle ear implants and the postoperative ear. The results show that PLGA is a safe and biocompatible biomaterial suitable for middle ear surgery.
Coating biomaterials with albumin significantly reduces bacterial adherence to them, which supports the use of such coating in implant surgery. Cone beam CT is suitable for imaging the ear. Its accuracy in demonstrating clinically important structures is at least as good as, and the radiation dose to the patient lower than, that of the current spiral CT of the ear. This makes the method a safer alternative for the patient than spiral CT, especially if the patient's condition requires follow-up and repeated imaging, and if limited regions are to be imaged uni- or bilaterally.
Abstract:
The purpose of this study was to evaluate the use of sentinel node biopsy (SNB) in axillary nodal staging in breast cancer. Special interest was paid to sentinel node (SN) visualization, the intraoperative detection of SN metastases, the feasibility of SNB in patients with pure tubular carcinoma (PTC) and in those with ductal carcinoma in situ (DCIS) in a core needle biopsy (CNB), and the detection of axillary recurrences after a tumour-negative SNB. Patients and methods. The study comprised 1580 patients with clinically node-negative stage T1-T2 breast cancer who underwent lymphoscintigraphy (LS), SNB and breast surgery between June 2000 and 2004 at the Breast Surgery Unit. The CNB samples were obtained from women who participated in the biennial, population-based mammography screening at the Mammography Screening Centre of Helsinki in 2001-2004. In the follow-up, a cohort of 205 patients who avoided AC due to negative SNB findings was evaluated using ultrasonography one and three years after breast surgery. Results. The visualization rate of axillary SNs was not enhanced by adjusting radioisotope doses according to BMI. The sensitivity of the intraoperative diagnosis of SN metastases of invasive lobular carcinoma (ILC) was higher with rapid intraoperative immunohistochemistry (IHC), 87%, than without it, 66%. The prevalence of tumour-positive SN findings was 27% in the 33 patients with breast tumours diagnosed as PTC. The median histological tumour size was similar in patients with or without axillary metastases. After the histopathological review, six out of 27 patients with true PTC had axillary metastases, with no significant change in the risk factors for axillary metastases. Of the 67 patients with DCIS in the preoperative percutaneous biopsy specimen, 30% had invasion in the surgical specimen. The strongest predictive factor for invasion was the visibility of the lesion on ultrasound. In the three-year follow-up, axillary recurrence was found in only two (0.5%) of the total of 383 ultrasound examinations performed during the study, and only one of the 369 examinations revealed cancer. None of the ultrasound examinations were false positive, and no study participant was subjected to unnecessary surgery due to ultrasound monitoring. Conclusions. Adjusting the dose of the radioactive tracer according to patient BMI does not increase the visualization rate of SNs. The intraoperative diagnosis of SN metastases is enhanced by rapid IHC, particularly in patients with ILC. SNB seems to be a feasible method for the axillary staging of pure tubular carcinoma in patients with a low prevalence of axillary metastases. SNB also appears to be a sensible method in patients undergoing mastectomy due to DCIS in CNB. It also seems useful in patients with lesions visible on breast US. During follow-up, routine monitoring of the ipsilateral axilla using US is not worthwhile among breast cancer patients who avoided AC due to negative SN findings.
Abstract:
Conventional invasive coronary angiography is the clinical gold standard for detecting coronary artery stenoses. Noninvasive multidetector computed tomography (MDCT) in combination with retrospective ECG gating has recently been shown to permit visualization of the coronary artery lumen and detection of coronary artery stenoses. Single photon emission computed tomography (SPECT) perfusion imaging has been considered the reference method for the evaluation of nonviable myocardium, but magnetic resonance imaging (MRI) can accurately depict structure, function, effusion and myocardial viability, with an overall capacity unmatched by any other single imaging modality. Magnetocardiography (MCG) provides noninvasive information about myocardial excitation propagation and repolarization without the use of electrodes. This evolving technique may be considered the magnetic equivalent of electrocardiography. The aim of the present series of studies was to evaluate changes in the myocardium caused by coronary artery disease as assessed with SPECT and MRI, to examine the capability of multidetector computed tomography coronary angiography (MDCT-CA) to detect significant stenoses in the coronary arteries, and to examine the capability of MCG to assess remote myocardial infarctions. Our study showed that in severe, progressive coronary artery disease, laser treatment does not improve global left ventricular function or myocardial perfusion, but it does preserve systolic wall thickening in fixed defects (scar). It also prevents changes from ischemic myocardial regions to scar. The MCG repolarization variables are informative in remote myocardial infarction and may perform as well as the conventional QRS criteria in the detection of healed myocardial infarction. These ST-T abnormalities are more pronounced in patients with Q-wave infarction than in patients with non-Q-wave infarctions. MDCT-CA had a sensitivity of 82%, a specificity of 94%, a positive predictive value of 79%, and a negative predictive value of 95% for stenoses over 50% in the main coronary arteries, as compared with conventional coronary angiography, in patients with known coronary artery disease. Left ventricular wall dysfunction, perfusion defects and infarctions were detected in 50-78% of sectors assigned to calcifications or stenoses, but also in sectors supplied by normally perfused coronary arteries. Our study showed a low sensitivity (63%) of MDCT in detecting obstructive coronary artery disease in patients with severe aortic stenosis. Massive calcifications complicated the correct assessment of the coronary artery lumen.
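For readers less familiar with the reported diagnostic metrics, the toy calculation below uses hypothetical counts chosen only to roughly reproduce the reported percentages; it is not data from the study. It shows how sensitivity, specificity, PPV and NPV follow from a 2x2 comparison against the angiographic reference.

```python
# Hypothetical 2x2 confusion matrix against the reference standard; the counts
# below are illustrative only and are not taken from the study.
tp, fp, fn, tn = 82, 22, 18, 328   # true/false positives and negatives

sensitivity = tp / (tp + fn)       # proportion of true stenoses detected
specificity = tn / (tn + fp)       # proportion of non-stenosed vessels correctly ruled out
ppv = tp / (tp + fp)               # probability that a positive finding is a true stenosis
npv = tn / (tn + fn)               # probability that a negative finding truly excludes stenosis

print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}, "
      f"PPV={ppv:.2f}, NPV={npv:.2f}")
```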
Abstract:
Myotonic dystrophies type 1 (DM1) and type 2 (DM2) are the most common forms of muscular dystrophy affecting adults. They are autosomal dominant diseases caused by microsatellite tri- or tetranucleotide repeat expansion mutations in transcribed but not translated gene regions. The mutant RNA accumulates in nuclei, disturbing the expression of several genes. The more recently identified DM2 disease is less well known, yet more than 300 patients have been confirmed in Finland thus far, and the true number is believed to be much higher. DM1 and DM2 share some features in their general clinical presentation and molecular pathology, yet they show distinctive differences, including disease severity and differential muscle and fiber type involvement. However, the molecular differences underlying DM1 and DM2 muscle pathology are not well understood. Although the primary tissue affected is muscle, both DMs show a multisystemic phenotype due to the wide expression of the mutation-carrying genes. DM2 is particularly intriguing, as it shows an exceptionally wide spectrum of clinical manifestations. For this reason, it constitutes a real diagnostic challenge. The core symptoms in DM2 include proximal muscle weakness, muscle pain, myotonia, cataracts, cardiac conduction defects and endocrinological disturbances; however, none of these is mandatory for the disease. Myalgic pains may be the most disabling symptom for decades, sometimes leading to incapacity for work. In addition, DM2 may cause major socio-economic consequences for the patient, if not diagnosed, due to misunderstanding and false stigmatization. In this thesis work, we have (I) improved DM2 differential diagnostics based on muscle biopsy, and (II) described abnormalities in mRNA and protein expression in DM1 and DM2 patient skeletal muscles, showing partial differences between the two diseases, which may contribute to muscle pathology in these diseases. This is the first description of histopathological differences between DM1 and DM2, which can be used in differential diagnostics. Two novel high-resolution applications of in situ hybridization are described, which can be used for direct visualization of the DM2 mutation in muscle biopsy sections, or for mutation size determination on extended DNA fibers. By measuring protein and mRNA expression in the samples, differential changes in expression patterns affecting contractile proteins, other structural proteins and calcium-handling proteins were found in DM2 compared with DM1. The dysregulation at the mRNA level was caused by altered transcription and abnormal splicing. The findings reported here indicate that the extent of aberrant splicing is higher in DM2 than in DM1. In addition, the described abnormalities correlate to some extent with the differences in fiber type involvement in the two disorders.
Abstract:
The aim of this study was to evaluate and test methods that could improve the local estimates of a general model fitted to a large area. In the first three studies, the intention was to divide the study area into sub-areas that were as homogeneous as possible according to the residuals of the general model, and in the fourth study, the localization was based on the local neighbourhood. According to spatial autocorrelation (SA), points closer together in space are more likely to be similar than those that are farther apart. Local indicators of SA (LISAs) test for local clustering of similar values. A LISA was calculated for every observation in the dataset, and together with the spatial position and the residual of the global model, the data were segmented using two different methods: classification and regression trees (CART) and the multiresolution segmentation algorithm (MS) of the eCognition software. The general model was then re-fitted (localized) to the resulting sub-areas. In kriging, the SA is modelled with a variogram, and the spatial correlation is a function of the distance (and direction) between the observation and the point of calculation. A general trend is corrected with the residual information of the neighbourhood, whose size is controlled by the number of nearest neighbours. Nearness is measured as Euclidean distance. With all methods, the root mean square errors (RMSEs) were lower than those of the general model, but with the methods that segmented the study area, the variation in individual localized RMSEs was wide. Therefore, an element capable of controlling the division or localization should be included in the segmentation-localization process. Kriging, on the other hand, provided stable estimates when the number of neighbours was sufficient (over 30), thus offering the best potential for further studies. CART could also be combined with kriging or with non-parametric methods, such as most similar neighbours (MSN).
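As a minimal sketch of the LISA step described above, the code below computes a simplified local Moran's I over model residuals with k-nearest-neighbour weights. The toy data, the weighting scheme and the function itself are assumptions for illustration, not the exact formulation used in the thesis, where the LISA values were further fed into CART and eCognition's MS segmentation.

```python
import numpy as np

def local_morans_i(coords, residuals, k=8):
    """Simplified local Moran's I for each observation, using row-standardized
    k-nearest-neighbour weights. Illustrative only."""
    z = residuals - residuals.mean()
    s2 = (z ** 2).mean()
    # Pairwise Euclidean distances between observation coordinates
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)                 # an observation is not its own neighbour
    lisa = np.empty(len(z))
    for i in range(len(z)):
        nbrs = np.argsort(d[i])[:k]             # indices of the k nearest neighbours
        lisa[i] = (z[i] / s2) * z[nbrs].mean()  # row-standardized weights = 1/k
    return lisa

# Toy example: 200 random points with spatially smooth residuals
rng = np.random.default_rng(1)
coords = rng.uniform(0, 100, size=(200, 2))
residuals = np.sin(coords[:, 0] / 15.0) + rng.normal(0, 0.2, 200)
print(local_morans_i(coords, residuals)[:5])
```

Positive values indicate that an observation's residual is surrounded by similar residuals (a local cluster), which is the kind of signal used to delineate homogeneous sub-areas before re-fitting the general model.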
Abstract:
Gene expression is one of the most critical factors influencing the phenotype of a cell. As a result of several technological advances, measuring gene expression levels has become one of the most common molecular biological measurements used to study the behaviour of cells. The scientific community has produced an enormous and constantly growing collection of gene expression data from various human cells, both from healthy and pathological conditions. However, while each of these studies is informative and enlightening in its own context and research setup, diverging methods and terminologies make it very challenging to integrate existing gene expression data into a more comprehensive view of human transcriptome function. On the other hand, bioinformatic science advances only through data integration and synthesis. The aim of this study was to develop biological and mathematical methods to overcome these challenges, to construct an integrated database of the human transcriptome, and to demonstrate its usage. The methods developed in this study can be divided into two distinct parts. First, the biological and medical annotation of the existing gene expression measurements needed to be encoded with systematic vocabularies. No single existing biomedical ontology or vocabulary was suitable for this purpose; thus, a new annotation terminology was developed as part of this work. The second part was to develop mathematical methods to correct the noise and the systematic differences and errors in the data caused by the various array generations. Additionally, there was a need to develop suitable computational methods for sample collection and archiving, unique sample identification, database structures, data retrieval and visualization. Bioinformatic methods were developed to analyze gene expression levels and putative functional associations of human genes using the integrated gene expression data. A method to interpret individual gene expression profiles across all the healthy and pathological tissues of the reference database was also developed. As a result of this work, 9783 human gene expression samples measured with Affymetrix microarrays were integrated to form a unique human transcriptome resource, GeneSapiens. This makes it possible to analyse the expression levels of 17330 genes across 175 types of healthy and pathological human tissues. Application of this resource to the interpretation of individual gene expression measurements allowed the identification of the tissue of origin with 92.0% accuracy among 44 healthy tissue types. A systematic analysis of the transcriptional activity levels of 459 kinase genes was performed across 44 healthy and 55 pathological tissue types, and a genome-wide analysis of kinase gene co-expression networks was carried out. This analysis revealed biologically and medically interesting data on putative kinase gene functions in health and disease. Finally, we developed a method for the alignment of gene expression profiles (AGEP) to analyse individual patient samples and pinpoint gene- and pathway-specific changes in the test sample relative to the reference transcriptome database. We also showed how large-scale gene expression data resources can be used to quantitatively characterize changes in the transcriptomic program of differentiating stem cells. Taken together, these studies indicate the power of systematic bioinformatic analyses to infer biological and medical insights from existing published datasets, as well as to facilitate the interpretation of new molecular profiling data from individual patients.
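The tissue-of-origin idea can be illustrated with a minimal nearest-centroid classifier over an integrated expression matrix. This is a toy sketch with synthetic data; the actual GeneSapiens/AGEP methodology involves far more elaborate cross-array normalization and profile alignment than shown here.

```python
import numpy as np

# Toy sketch: predict tissue of origin for a new sample by correlating its
# expression profile with per-tissue mean profiles ("centroids") drawn from an
# integrated reference matrix. Data and dimensions are synthetic.
rng = np.random.default_rng(2)
n_genes, tissues = 500, ["liver", "kidney", "brain", "muscle"]

# Reference: one hypothetical centroid profile per tissue type
centroids = {t: rng.normal(0, 1, n_genes) for t in tissues}

def predict_tissue(sample, centroids):
    """Return the tissue whose centroid profile best correlates with the sample."""
    corr = {t: np.corrcoef(sample, c)[0, 1] for t, c in centroids.items()}
    return max(corr, key=corr.get)

# A new "patient" sample generated as a noisy copy of the kidney centroid
new_sample = centroids["kidney"] + rng.normal(0, 0.5, n_genes)
print(predict_tissue(new_sample, centroids))   # expected output: kidney
```

In the real resource, the reference matrix spans thousands of samples and 175 tissue types, and the comparison of an individual profile against that reference is what AGEP formalizes at the gene and pathway level.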
Abstract:
The main aim of the present study was to develop information and communication technology (ICT) based chemistry education. The goals of the study were to support meaningful chemistry learning, research-based teaching and the diffusion of ICT innovations. These goals were used as guidelines that form the theoretical framework of the study. This doctoral dissertation is based on an eight-stage research project that included three design research studies. These three design research studies were scrutinized as separate case studies, in which the cases were formed according to the different design teams: i) one researcher was in charge of the design and teachers were involved in the research process, ii) a research group was in charge of the design and students were involved in the research process, and iii) the design was done by student teams, the research was done collaboratively, and the design process was coordinated by a researcher. The research projects were conducted using a mixed-methods approach, which enabled a comprehensive view of education design. In addition, the three central areas of design research (problem analysis, design solution and design process) were included in the research, which was guided by the main research questions formed according to these areas: 1) design solution: what kind of elements are included in ICT-based learning environments that support meaningful chemistry learning and the diffusion of innovation; 2) problem analysis: what kind of new possibilities do the designed learning environments offer for supporting meaningful chemistry learning; and 3) design process: what kind of opportunities and challenges does collaboration bring to the design of ICT-based learning environments? The main research questions were answered on the basis of the analysis of the survey and observation data, six designed learning environments and ten design narratives from the three case studies. Altogether 139 chemistry teachers and teacher students were involved in the design processes. The data were mainly analysed using qualitative content analysis. The first main result of the study gives new information on meaningful chemistry learning and on the elements of an ICT-based learning environment that support the diffusion of innovation, which can help in the development of future ICT education design. When the designed learning environment was examined in the context of chemistry education, it was evident that an ICT-based chemistry learning environment supporting the meaningful learning of chemistry motivates the students and makes the teacher's work easier. In addition, it should enable the simultaneous fulfilment of several pedagogical goals and activate higher-level cognitive processes. A learning environment that supports the diffusion of ICT innovation is suitable for the Finnish school environment, based on open source code, and easy to use, with high-quality chemistry content. According to the second main result, new information was acquired about the possibilities of ICT-based learning environments in supporting meaningful chemistry learning. This will help in setting the goals for future ICT education. After the analysis of the design solutions and their evaluations, it can be said that ICT enables the recognition of all the elements that define learning environments (i.e. didactic, physical, technological and social elements).
The research particularly demonstrates the significance of ICT in supporting students' motivation and higher-level cognitive processes, as well as the versatile visualization resources for chemistry that ICT makes possible. In addition, a research-based teaching method supports the diffusion of the studied innovation well at the individual level. The third main result brought out new information on the significance of collaboration in design research, which guides the design of ICT education development. According to the analysis of the design narratives, it can be said that collaboration is important in the execution of scientifically reliable design research. It enables comprehensive requirement analysis and multifaceted development, which improves the reliability and validity of the research. At the same time, it poses reliability challenges, for example by complicating documentation and coordination. In addition, a new method for design research was developed; its aim is to support the execution of complicated collaborative design projects. To increase the reliability and validity of the research, a model theory was used. It enables time-bound documentation and visualization of design decisions, which clarifies the process and thereby improves the reliability of the research. The validity of the research is improved by requirement definition through models; in this way, learning environments that meet the design goals can be constructed. The designed method can be used in education development from the comprehensive school level to higher education. It can be used to recognize the needs of different interest groups and individuals with regard to processes, technology and substance knowledge, as well as the interfaces and relations between them. The developed method also has commercial potential; it is used to design learning environments for national and international markets.