935 results for accessibility analysis tools
Abstract:
Machine Learning for geospatial data: algorithms, software tools and case studies. The thesis is devoted to the analysis, modeling and visualisation of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence mainly concerned with the development of techniques and algorithms that allow computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning? In short, most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can solve classification, regression and probability density modeling problems in high-dimensional geo-feature spaces, composed of geographical coordinates and additional relevant spatially referenced features ("geo-features"). They are well suited to be implemented as predictive engines in decision support systems, for the purposes of environmental data mining including pattern recognition, modeling and prediction as well as automatic data mapping. Their efficiency is competitive with geostatistical models in low-dimensional geographical spaces, but they are indispensable in high-dimensional geo-feature spaces. The most important and popular machine learning algorithms and models of interest for geo- and environmental sciences are presented in detail, from the theoretical description of the concepts to the software implementation. The main algorithms and models considered are: the multilayer perceptron (MLP, a workhorse of machine learning), general regression neural networks (GRNN), probabilistic neural networks (PNN), self-organising (Kohonen) maps (SOM), Gaussian mixture models (GMM), radial basis function networks (RBF) and mixture density networks (MDN). This set of models covers machine learning tasks such as classification, regression and density estimation. Exploratory data analysis (EDA) is the initial and a very important part of any data analysis. In this thesis, the concepts of exploratory spatial data analysis (ESDA) are considered using both the traditional geostatistical approach, based on experimental variography, and machine learning. Experimental variography, which studies the relationships between pairs of points, is a basic tool for the geostatistical analysis of anisotropic spatial correlations and helps to detect spatial patterns describable by two-point statistics. A machine learning approach to ESDA is presented through the k-nearest neighbors (k-NN) method, which is simple and has very good interpretation and visualization properties. An important part of the thesis deals with a current hot topic, namely the automatic mapping of geospatial data. The general regression neural network (GRNN) is proposed as an efficient model for this task. The performance of the GRNN is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, where it significantly outperformed all other approaches, especially under emergency conditions. The thesis consists of four chapters: theory, applications, software tools, and how-to-do-it examples. An important part of the work is a collection of software tools, Machine Learning Office, developed over the last 15 years and used both in many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, and in fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals, classification of soil types and hydrogeological units, decision-oriented mapping with uncertainties, and natural hazard (landslide, avalanche) assessment and susceptibility mapping. Complementary tools for exploratory data analysis and visualisation were developed as well, with a user-friendly, easy-to-use interface.
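To make the GRNN idea above concrete, here is a minimal sketch of a GRNN (Nadaraya-Watson kernel regression) spatial predictor in Python; the toy field, the sample size and the kernel width sigma are illustrative placeholders, not the SIC 2004 setup or the Machine Learning Office implementation.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma):
    """GRNN / Nadaraya-Watson kernel regression: each prediction is a
    Gaussian-kernel-weighted average of the training targets."""
    # Squared Euclidean distances between every query and training point
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * sigma ** 2))      # kernel weights
    return (w @ y_train) / w.sum(axis=1)      # locally weighted average

# Toy usage: map a noisy 2-D field from 200 scattered samples onto a grid
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 2))          # sample coordinates
y = np.sin(3 * X[:, 0]) + np.cos(2 * X[:, 1]) + 0.1 * rng.normal(size=200)
g = np.linspace(0, 1, 50)
grid = np.stack(np.meshgrid(g, g), axis=-1).reshape(-1, 2)
z = grnn_predict(X, y, grid, sigma=0.08)      # sigma would be tuned by
                                              # cross-validation in practice
```

The single kernel width is the model's only free parameter, which is what makes this family attractive for automatic mapping: tuning reduces to a one-dimensional cross-validation search.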
Abstract:
PURPOSE OF REVIEW: The kidney plays an essential role in maintaining sodium and water balance, thereby controlling the volume and osmolarity of the extracellular body fluids, the blood volume and the blood pressure. The final adjustment of sodium and water reabsorption in the kidney takes place in cells of the distal part of the nephron, in which a set of apical and basolateral transporters participate in vectorial sodium and water transport from the tubular lumen to the interstitium and, finally, to the general circulation. According to a current model, the activity and/or cell-surface expression of these transporters is under the control of a gene network composed of hormonally regulated as well as constitutively expressed genes. It is proposed that this gene network may include new candidate genes for salt- and water-losing syndromes and for salt-sensitive hypertension. A new generation of functional genomics techniques has recently been applied to the characterization of this gene network. The purpose of this review is to summarize these studies and to discuss the potential of the different techniques for characterization of the renal transcriptome. RECENT FINDINGS: Recently, DNA microarrays and serial analysis of gene expression have been applied to characterize the kidney transcriptome in different in-vivo and in-vitro models. In these studies, a set of interesting new genes potentially involved in the regulation of sodium and water reabsorption by the kidney has been identified and is currently under detailed investigation. SUMMARY: Characterization of the kidney transcriptome is greatly expanding our knowledge of the gene networks involved in multiple kidney functions, including the maintenance of sodium and water homeostasis.
Abstract:
Plants are essential for human society: our daily food, construction materials and sustainable energy are derived from plant biomass. Yet, despite this importance, the multiple developmental aspects of plants are still poorly understood and represent a major challenge for science. With the emergence of high-throughput devices for genome sequencing and high-resolution imaging, data have never been so easy to collect, generating huge amounts of information. Computational analysis is one way to integrate those data and to reduce the apparent complexity towards an appropriate scale of abstraction, with the aim of eventually providing new answers and directing further research. This is the motivation behind this thesis work, i.e. the application of descriptive and predictive analytics combined with computational modeling to problems that revolve around morphogenesis at the subcellular and organ scales. One of the goals of this thesis is to elucidate how the auxin-brassinosteroid phytohormone interaction determines cell growth in the root apical meristem of Arabidopsis thaliana (Arabidopsis), the plant model of reference for molecular studies. The pertinent information about signaling protein relationships was extracted from the literature to reconstruct the entire hormonal crosstalk network. Due to the lack of quantitative information, we employed a qualitative logical modeling formalism. This work allowed us to confirm the synergistic effect of the hormonal crosstalk on cell elongation, to explain some of our paradoxical mutant phenotypes, and to predict a novel interaction between the BREVIS RADIX (BRX) protein and the transcription factor MONOPTEROS (MP), which turned out to be critical for the maintenance of the root meristem. On the same subcellular scale, another study in the monocot model Brachypodium distachyon (Brachypodium) revealed an alternative wiring of the auxin-ethylene crosstalk as compared to Arabidopsis. In the latter, increasing interference with auxin biosynthesis results in progressively shorter roots. By contrast, a hypomorphic Brachypodium mutant isolated in this study, affected in an enzyme of the auxin biosynthesis pathway, displayed a dramatically longer seminal root. Our morphometric analysis confirmed that more anisotropic cells (thinner and longer) are principally responsible for the mutant root phenotype. Further characterization pointed towards an inverted regulatory logic in the relation between ethylene signaling and auxin biosynthesis in Brachypodium as compared to Arabidopsis, which explains the phenotypic discrepancy. Finally, the morphometric analysis of hypocotyl secondary growth applied in this study was performed with the image-processing pipeline of our quantitative histology method. During secondary growth, the hypocotyl reorganizes its primary bilateral symmetry into a radial symmetry of highly specialized concentric tissues, which initially comprise a few dozen cells but can easily reach tens of thousands at later developmental stages. Such a scale only permits observation in thin cross-sections, severely hampering a comprehensive analysis of the morphodynamics involved. Our quantitative histology strategy overcomes this limitation. We acquired hypocotyl cross-sections from tiled high-resolution images and extracted their information content using a custom high-throughput image-processing and segmentation pipeline. Coupled with an automated cell-type recognition algorithm, it allows precise quantitative characterization of vascular development and reveals developmental patterns that are not evident from visual inspection, for example the steady interspacing of the phloem poles. Further analyses indicated a change in the growth anisotropy of cambial and phloem cells, which appeared in phase with the expansion of the xylem. Combining genetic tools and biomechanical modeling, we showed that the growth-anisotropy axis of the peripheral tissue layers reorients only when the growth rate of the central tissues exceeds that of the periphery. This prediction was confirmed by computing the ratio of xylem to phloem growth rates throughout secondary growth: high ratios are indeed observed, concomitant with the progressive tangential homogenization of the cambium. These results suggest a self-organizing mechanism, promoted by a gradient of division in the cambium, that generates a pattern of mechanical constraints and, in turn, reorients the growth anisotropy of the peripheral tissues to sustain secondary growth.
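Because the hormonal crosstalk was modeled with a qualitative logical formalism, a tiny Boolean-network sketch illustrates the idea; the three nodes and their update rules below are hypothetical placeholders, not the thesis's actual auxin/BR network.

```python
# Hypothetical three-node motif: the rules are illustrative only.
rules = {
    "auxin_signal": lambda s: s["auxin_signal"],   # treated as a fixed input
    "BR_signal":    lambda s: s["BR_signal"],      # treated as a fixed input
    "growth":       lambda s: s["auxin_signal"] and s["BR_signal"],  # synergy
}

def step(state):
    """Synchronous update: every node is recomputed from the current state."""
    return {node: bool(rule(state)) for node, rule in rules.items()}

def attractor(state, max_steps=50):
    """Iterate until a fixed point is reached (or give up after max_steps)."""
    for _ in range(max_steps):
        nxt = step(state)
        if nxt == state:
            return state
        state = nxt
    return state

# Enumerate all hormone input combinations: growth requires both signals
for auxin in (False, True):
    for br in (False, True):
        final = attractor({"auxin_signal": auxin, "BR_signal": br,
                           "growth": False})
        print(auxin, br, "->", final["growth"])
```

Enumerating input combinations and reading off the attractors is exactly the kind of qualitative prediction such a formalism yields when quantitative rate data are unavailable.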
Abstract:
Objectives: Quantitative ultrasound (QUS) is an attractive method for assessing fracture risk because it is portable, inexpensive, free of ionizing radiation, and available in areas of the world where DXA is not readily accessible or affordable. However, the diversity of QUS scanners and the variability of fracture outcomes measured in different studies are an important obstacle to widespread utilisation of QUS for fracture risk assessment. We aimed in this review to assess the predictive power of heel QUS for fractures, considering different characteristics of the association (QUS parameters and fracture outcomes measured, QUS devices, study populations, and independence from DXA-measured bone density). Materials/Methods: We conducted an inverse-variance random-effects meta-analysis of prospective studies with heel QUS measures at baseline and fracture outcomes in their follow-up. Relative risks (RR) per standard deviation (SD) of different QUS parameters (broadband ultrasound attenuation [BUA], speed of sound [SOS], stiffness index [SI], and quantitative ultrasound index [QUI]) for various fracture outcomes (hip, vertebral, any clinical, any osteoporotic, and major osteoporotic fractures) were reported based on study questions. Results: 21 studies including 55,164 women and 13,742 men were included, with a total follow-up of 279,124 person-years. All four QUS parameters were associated with risk of different fractures. For instance, the RR of hip fracture per 1 SD decrease was 1.69 (95% CI 1.43-2.00) for BUA, 1.96 (95% CI 1.64-2.34) for SOS, 2.26 (95% CI 1.71-2.99) for SI, and 1.99 (95% CI 1.49-2.67) for QUI. Validated devices from different manufacturers predicted fracture risks with a similar performance (meta-regression p-values > 0.05 for difference of devices). There was no sign of publication bias among the studies. QUS measures predicted fracture with a similar performance in men and women. Meta-analysis of studies with QUS measures adjusted for hip DXA showed a significant and independent association with fracture risk (RR/SD for BUA = 1.34 [95% CI 1.22-1.49]). Conclusions: This study confirms that heel QUS, using validated devices, predicts risk of different fracture outcomes in elderly men and women. Further research and international collaborations are needed for standardisation of QUS parameters across manufacturers and for inclusion of QUS in fracture risk assessment tools. Disclosure of Interest: None declared.
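For readers unfamiliar with the pooling step, here is a minimal sketch of inverse-variance random-effects meta-analysis (the DerSimonian-Laird estimator) in Python; the per-study numbers are made up for illustration and are not the 21 studies analyzed here.

```python
import math

def dersimonian_laird(log_rrs, ses):
    """Inverse-variance random-effects pooling (DerSimonian-Laird).
    log_rrs: per-study log relative risks; ses: their standard errors."""
    w = [1 / se**2 for se in ses]                        # fixed-effect weights
    fixed = sum(wi * y for wi, y in zip(w, log_rrs)) / sum(w)
    q = sum(wi * (y - fixed)**2 for wi, y in zip(w, log_rrs))  # Cochran's Q
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(log_rrs) - 1)) / c)        # between-study variance
    w_re = [1 / (se**2 + tau2) for se in ses]            # random-effects weights
    pooled = sum(wi * y for wi, y in zip(w_re, log_rrs)) / sum(w_re)
    return pooled, math.sqrt(1 / sum(w_re))

# Illustrative (made-up) per-study RRs per SD decrease of a QUS parameter
rrs = [1.5, 1.8, 1.6, 2.1]
ses = [0.10, 0.15, 0.08, 0.20]
pooled, se = dersimonian_laird([math.log(r) for r in rrs], ses)
lo, hi = math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se)
print(f"pooled RR/SD = {math.exp(pooled):.2f} (95% CI {lo:.2f}-{hi:.2f})")
```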
Abstract:
Fenix is a large production management system for the forest industry. The reporting and printing services of the Fenix system are in a period of transition: the reporting tools used so far are becoming obsolete and must be replaced with new ones. The new reporting platform, Global Printing System (GPS), is built around the StreamServe Business Communication Platform. The new platform is intended to handle Fenix's printing and reporting tasks. This thesis describes the implementation of the reporting platform and its most important features. The performance of the new platform has left room for improvement; in particular, generating large reports has sometimes taken hopelessly long. The thesis analyzes the performance of the reporting platform and identifies potential bottlenecks. Solutions to the performance weaknesses are sought, and suggestions are given for improving performance. As GPS is an XML-based system, the efficiency of XML processing plays a large part in its performance. The data coming into GPS arrives in XML format, and the efficiency of parsing this input is one of the key factors in the overall efficiency of GPS. The performance improvements therefore focus strongly on more efficient use of XML, and suggestions are presented for improving it.
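As a generic illustration of why input parsing dominates in XML-based pipelines (StreamServe's internals are not shown here), the sketch below contrasts streaming parsing, whose memory use stays flat, with DOM parsing, whose memory grows with document size; the file layout and the "row" tag name are assumptions.

```python
import xml.etree.ElementTree as ET

def count_rows_streaming(path, tag="row"):
    """Iterate over a large XML file without loading it all into memory."""
    count = 0
    for event, elem in ET.iterparse(path, events=("end",)):
        if elem.tag == tag:
            count += 1
            elem.clear()          # free the processed subtree immediately
    return count

def count_rows_dom(path, tag="row"):
    """Equivalent DOM version: simpler, but holds the whole tree in memory."""
    return len(ET.parse(path).getroot().findall(f".//{tag}"))
```

For report inputs of tens or hundreds of megabytes, the streaming variant is typically the difference between a flat memory profile and an allocation spike proportional to the document.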
Abstract:
This paper presents research concerning the conversion of non-accessible web pages containing mathematical formulae into accessible versions through an OCR (Optical Character Recognition) tool. The objective of this research is twofold. First, to establish criteria for evaluating the potential accessibility of mathematical web sites, i.e. the feasibility of converting non-accessible (non-MathML) math sites into accessible ones (MathML). Second, to propose a data model and a mechanism to publish evaluation results, making them available to the educational community, who may use them as a quality measurement for selecting learning material. Results show that conversion using OCR tools is not viable for math web pages, mainly for two reasons: many of these pages are designed to be interactive, making a correct conversion difficult, if not impossible; and formulae (whether images or text) have been written without taking into account standards of math writing, so OCR tools do not properly recognize math symbols and expressions. In spite of these results, we think the proposed methodology for creating and publishing evaluation reports may be rather useful in other accessibility assessment scenarios.
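A minimal sketch of the kind of potential-accessibility check discussed above (our own illustration, not the paper's evaluation tool): count MathML islands versus image-embedded formulae in a page, using only the Python standard library. The alt-text heuristic is an assumption.

```python
from html.parser import HTMLParser

class FormulaScanner(HTMLParser):
    """Tallies MathML <math> elements and images that look like formulae."""
    def __init__(self):
        super().__init__()
        self.mathml_count = 0
        self.image_formula_count = 0

    def handle_starttag(self, tag, attrs):
        if tag == "math":                         # a MathML island
            self.mathml_count += 1
        elif tag == "img":
            alt = dict(attrs).get("alt", "") or ""
            # Crude heuristic: images whose alt text looks like a formula
            if any(ch in alt for ch in "=+^"):
                self.image_formula_count += 1

def scan(html_text):
    s = FormulaScanner()
    s.feed(html_text)
    return {"mathml": s.mathml_count, "image_formulae": s.image_formula_count}

print(scan('<p><math><mi>x</mi></math> and <img alt="y=x^2"></p>'))
```

A page where the second counter dominates would be a candidate for OCR-based conversion, which is precisely the scenario the paper found problematic.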
Abstract:
The use by police services and investigating agencies of forensic data in an intelligence perspective is still fragmentary and to some extent ignored. In order to increase the efficiency of criminal investigation in targeting illegal drug trafficking organisations and to provide valuable information about their methods, it is necessary to include and interpret objective drug analysis results already during the investigation phase. The value of visual, physical and chemical data of seized ecstasy tablets as a support for criminal investigation on a strategic and tactical level has been investigated. In a first phase, different characteristics of ecstasy tablets were studied in order to define their relevance, variation, correlation and discriminating power in an intelligence perspective. Over five years, more than 1200 ecstasy seizure cases (concerning about 150,000 seized tablets) coming from different regions of Switzerland (the city and canton of Zurich and the cantons of Ticino, Neuchâtel and Geneva) were systematically recorded. This turned out to be a statistically representative database including both large and small cases. In the second phase, various comparison and clustering methods were tested and evaluated with respect to the type and relevance of tablet characteristics, thus increasing knowledge about synthetic drugs, their manufacture and their trafficking. Finally, analytical methodologies were investigated and formalised, applying traditional intelligence methods. In this context, classical tools used in criminal analysis (like the I2 Analyst Notebook, I2 Ibase, ...) were tested and adapted to address the specific needs of forensic drug intelligence. The interpretation of the resulting links provides valuable information about criminal organisations and their trafficking methods. In the final part of this thesis, practical examples illustrate the use and value of such information.
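To illustrate the profiling idea (a simplified stand-in, not the thesis's actual comparison metric), the sketch below links seizure cases whose physical/chemical tablet profiles are sufficiently close under a scale-free distance; the feature set, case ids and threshold are hypothetical.

```python
def dissimilarity(p, q):
    """Canberra distance: per-feature relative differences, scale-free, so
    heavy features (e.g. weight in mg) do not drown out small ones."""
    return sum(abs(a - b) / (abs(a) + abs(b)) for a, b in zip(p, q))

def link_cases(profiles, threshold=0.1):
    """Return pairs of case ids whose profiles fall within the threshold."""
    ids = list(profiles)
    return [(a, b) for i, a in enumerate(ids) for b in ids[i + 1:]
            if dissimilarity(profiles[a], profiles[b]) <= threshold]

# Hypothetical profiles: [diameter_mm, thickness_mm, weight_mg, %MDMA]
cases = {
    "ZH-017": [8.1, 3.9, 251, 34.0],
    "GE-042": [8.0, 4.0, 249, 33.5],
    "TI-008": [9.2, 4.6, 310, 21.0],
}
print(link_cases(cases))   # only ZH-017 and GE-042 link: a possible batch
```

Linked pairs of this kind are the raw material that charting tools then organize into networks of production batches and distribution routes.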
Abstract:
Due to intense international competition, demanding and sophisticated customers, and diverse, transforming technological change, organizations need to renew their products and services by allocating resources to research and development (R&D). Managing R&D is complex but vital for many organizations to survive in a dynamic, turbulent environment. Thus, the increased interest among decision-makers in finding the right performance measures for R&D is understandable. The measures or evaluation methods of R&D performance can be utilized for multiple purposes: for strategic control, for justifying the existence of R&D, for providing information and improving activities, as well as for motivating and benchmarking. Earlier research in the field of R&D performance analysis has generally focused either on the activities and relevant factors and dimensions - e.g. strategic perspectives, purposes of measurement, levels of analysis, types of R&D or phases of the R&D process - prior to the selection of R&D performance measures, or on proposed principles or the actual implementation of the selection or design processes of R&D performance measures or measurement systems. This study aims at integrating the consideration of essential factors and dimensions of R&D performance analysis into developed selection processes of R&D measures, which have been applied in real-world organizations. The earlier models for corporate performance measurement found in the literature are to some extent adaptable to the development of measurement systems and the selection of measures in R&D activities. However, it is necessary to emphasize the special aspects of measuring R&D performance that make new approaches, especially for R&D performance measure selection, necessary. First, the special characteristics of R&D - such as the long time lag between inputs and outcomes, as well as the overall complexity and difficult coordination of activities - give rise to R&D performance analysis problems, such as the need for more systematic, objective, balanced and multi-dimensional approaches to R&D measure selection, as well as the incompatibility of R&D measurement systems with other corporate measurement systems and vice versa. Secondly, the above-mentioned characteristics and challenges bring forth the significance of the influencing factors and dimensions that need to be recognized in order to derive the selection criteria for measures and choose the right R&D metrics, which is the most crucial step in the measurement system development process. The main purpose of this study is to support the management and control of the research and development activities of organizations by increasing the understanding of R&D performance analysis, clarifying the main factors related to the selection of R&D measures, and providing novel approaches and methods for systematizing the whole strategy- and business-based selection and development process of R&D indicators. The final aim of the research is to support management in their R&D decision-making with suitable, systematically chosen measures or evaluation methods of R&D performance. Thus, the emphasis in most sub-areas of the present research has been on promoting the selection and development process of R&D indicators with the help of different tools and decision support systems, i.e. the research has normative features, providing guidelines through novel types of approaches.
The gathering of data and the conducting of case studies in metal and electronics industry companies, in the information and communications technology (ICT) sector, and in non-profit organizations helped us to formulate a comprehensive picture of the main challenges of R&D performance analysis in different organizations. This is essential, as recognition of the most important problem areas is a crucial element in the constructive research approach utilized in this study. Multiple practical benefits regarding the defined problem areas could be found in the various constructed approaches presented in this dissertation: 1) the selection of R&D measures became more systematic when compared to the empirical analysis, as it was common that no systematic approaches had been utilized in the studied organizations earlier; 2) the evaluation methods or measures of R&D chosen with the help of the developed approaches can be utilized more directly in decision-making, because of the thorough consideration of the purpose of measurement as well as other dimensions of measurement; 3) more balance in the set of R&D measures was desired and was gained through the holistic approaches to the selection processes; and 4) more objectivity was gained through organizing the selection processes, as the earlier systems were considered subjective in many organizations. Scientifically, this dissertation aims to contribute to the present body of knowledge of R&D performance analysis by facilitating the handling of the versatility and challenges of R&D performance analysis, as well as the factors and dimensions influencing the selection of R&D performance measures, and by integrating these aspects into the novel approaches, methods and tools developed for the selection processes of R&D measures, applied in real-world organizations. Throughout the research, the handling of the versatility and challenges of R&D performance analysis, as well as the factors and dimensions influencing R&D performance measure selection, is strongly integrated with the constructed approaches. Thus, the research meets the above-mentioned purposes and objectives of the dissertation from both the scientific and the practical point of view.
Abstract:
Superheater corrosion causes vast annual losses for power companies. With a reliable corrosion prediction method, plants can be designed accordingly, and knowledge of fuel selection and determination of process conditions may be utilized to minimize superheater corrosion. Growing interest in using recycled fuels creates additional demands for the prediction of corrosion potential. Models depending on corrosion theories will fail if the relations between the inputs and the output are poorly known. A prediction model based on fuzzy logic and an artificial neural network is able to improve its performance as the amount of data increases. The corrosion rate of a superheater material can most reliably be detected with a test done in a test combustor or in a commercial boiler. The steel samples can be located in a special, temperature-controlled probe and exposed to the corrosive environment for a desired time. These tests give information about the average corrosion potential in that environment. Samples may also be cut from superheaters during shutdowns. The analysis of samples taken from probes or superheaters after exposure to a corrosive environment is a demanding task: if the corrosive contaminants can be reliably analyzed, the corrosion chemistry can be determined and an estimate of the material lifetime can be given. In cases where the reason for corrosion is not clear, determining the corrosion chemistry and estimating the lifetime is more demanding. In order to provide a laboratory tool for the analysis and prediction, a new approach was chosen. During this study, the following tools were generated: (1) a model for the prediction of superheater fireside corrosion, based on fuzzy logic and an artificial neural network and built upon a corrosion database of fuel and bed material analyses and measured corrosion data; the developed model predicts superheater corrosion with high accuracy at the early stages of a project; and (2) an adaptive corrosion analysis tool based on image analysis, constructed as an expert system; this system supports user-defined algorithms, which allows the development of an artificially intelligent system for the task. According to the results of the analyses, several new rules were developed for determining the degree and type of corrosion. By combining these two tools, a user-friendly expert system for the prediction and analysis of superheater fireside corrosion was developed. This tool may also be used to minimize corrosion risks in the design of fluidized bed boilers.
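As an illustration of the neural-network half of such a prediction model (a generic sketch, not the thesis's model or database), the code below fits a one-hidden-layer network to synthetic fuel-analysis features; the feature names and target function are invented placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in data: [chlorine %, sulphur %, steam temp (scaled)]
X = rng.uniform(0, 1, size=(300, 3))
y = (2.0 * X[:, 0] - 0.5 * X[:, 1] + 1.5 * X[:, 0] * X[:, 2])[:, None]

W1 = rng.normal(0, 0.5, (3, 8)); b1 = np.zeros(8)    # hidden layer
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)    # output layer
lr = 0.1

for epoch in range(2000):                # plain full-batch gradient descent
    h = np.tanh(X @ W1 + b1)             # hidden activations
    pred = h @ W2 + b2                   # linear output: corrosion rate
    err = pred - y
    # Backpropagation of the mean-squared-error gradient
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h**2)       # tanh derivative
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

print("final training MSE:", float((err**2).mean()))
```

In the hybrid scheme the abstract describes, fuzzy memberships would pre-process such inputs, letting the network improve as the corrosion database grows.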
Abstract:
Finland has large forest fuel resources. However, the use of forest fuels for energy production has been low, except for small-scale use in heating. According to national action plans and programs related to wood energy promotion, the utilization of such resources will be multiplied over the next few years. The most significant part of this growth will be based on the utilization of forest fuels, produced from the logging residues of regeneration fellings, in industrial and municipal power and heating plants. The availability of logging residues was analyzed by means of resource and demand approaches in order to identify the regions most suitable for increasing forest fuel usage. The analysis included availability and supply cost comparisons between power plant sites with resources allocated in a least-cost manner, and for a predefined power plant structure under demand and supply constraints. Spatial analysis of worksite factors and regional geographies was carried out in a GIS model environment using geoprocessing and cartographic modeling tools. According to the results, the cost competitiveness of forest fuel supply should be improved in order to achieve the stated objectives in the near future. The availability and supply costs of forest fuels varied spatially and were very sensitive to worksite factors and transport distances. According to the site-specific analysis, the supply potential between different locations can differ many-fold. However, due to the technical and economic constraints of fuel supply and the dense power plant infrastructure, the supply potential is limited at the plant level. Therefore, potential and supply cost calculations depend on site-specific factors, and the regional characteristics of resources and infrastructure should be taken into consideration, for example by using a GIS modeling approach such as the one constructed in this study.
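A minimal sketch of least-cost allocation of the kind described above (illustrative only: the costs, distances and volumes are invented, and the real analysis runs on GIS layers): delivered cost grows linearly with transport distance, and supply cells are assigned to plants cheapest-first under demand constraints.

```python
def delivered_cost(dist_km, base_cost=8.0, rate_per_km=0.09):
    """EUR per MWh: roadside cost plus distance-dependent transport."""
    return base_cost + rate_per_km * dist_km

# (cell id, available MWh, {plant: road distance km}) - all hypothetical
cells = [("c1", 500, {"A": 20, "B": 70}),
         ("c2", 800, {"A": 90, "B": 30}),
         ("c3", 400, {"A": 55, "B": 60})]
demand = {"A": 700, "B": 900}                 # MWh needed per plant

# Greedy least-cost assignment: cheapest cell-plant pairs first
pairs = sorted((delivered_cost(d), cell, plant)
               for cell, vol, dists in cells
               for plant, d in dists.items())
remaining = {cell: vol for cell, vol, _ in cells}
supplied = {p: 0.0 for p in demand}
for cost, cell, plant in pairs:
    take = min(remaining[cell], demand[plant] - supplied[plant])
    if take > 0:
        remaining[cell] -= take
        supplied[plant] += take
        print(f"{cell} -> {plant}: {take:.0f} MWh at {cost:.2f} EUR/MWh")
```

The same logic, run over raster cells and road-network distances instead of three toy tuples, is what makes the supply potential so sensitive to transport distance.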
Abstract:
Occupational exposure modeling is widely used in the context of the E.U. regulation on the registration, evaluation, authorization, and restriction of chemicals (REACH). First-tier tools, such as the European Centre for Ecotoxicology and Toxicology of Chemicals (ECETOC) targeted risk assessment (TRA) or Stoffenmanager, are used to screen a wide range of substances. Those of concern are investigated further using second-tier tools, e.g., the Advanced REACH Tool (ART). Local sensitivity analysis (SA) methods are used here to determine the dominant factors for three models commonly used within the REACH framework: ECETOC TRA v3, Stoffenmanager 4.5, and ART 1.5. Based on the results of the SA, the robustness of the models is assessed. For ECETOC TRA, the process category (PROC) is the most important factor, and a failure to identify the correct PROC has severe consequences for the exposure estimate. Stoffenmanager is the most balanced model, and decision-making uncertainties in a single modifying factor are less severe in Stoffenmanager. ART requires a careful evaluation of the decisions in the source compartment, since it constitutes ~75% of the total exposure range, which corresponds to an exposure estimate spanning 20-22 orders of magnitude. Our results indicate that there is a trade-off between the accuracy and the precision of the models. Previous studies suggested that ART may lead to more accurate results in well-documented exposure situations. However, the choice of the adequate model should ultimately be determined by the quality of the available exposure data: if the practitioner is uncertain concerning two or more decisions in the entry parameters, Stoffenmanager may be more robust than ART.
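To make the local SA procedure concrete, here is a generic one-at-a-time sketch (our illustration, not the exact protocol of the study; the factor names and ranges are hypothetical): each multiplicative modifying factor is swung over its range while the others stay at baseline, and its contribution to the output spread is recorded in orders of magnitude.

```python
import math

# Hypothetical multiplicative exposure model: E = base * f1 * f2 * f3
factor_ranges = {
    "substance_emission": (0.3, 10.0),
    "local_controls":     (0.1, 1.0),
    "dilution":           (0.25, 4.0),
}
baseline = {name: 1.0 for name in factor_ranges}

def exposure(factors):
    base_concentration = 1.0                 # arbitrary reference level
    return base_concentration * math.prod(factors.values())

for name, (lo_v, hi_v) in factor_ranges.items():
    outputs = []
    for value in (lo_v, hi_v):               # one factor varied at a time
        f = dict(baseline)
        f[name] = value
        outputs.append(exposure(f))
    spread = math.log10(max(outputs) / min(outputs))
    print(f"{name}: {spread:.2f} orders of magnitude")
```

Ranking factors by this spread is what identifies, for instance, a dominant input whose misjudgment shifts the final estimate by more than a decade.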
Abstract:
The in vivo accessibility of the chick embryo makes it a favoured model system for experimental developmental biology. Although the range of available techniques now extends to mis-expression of genes through in ovo electroporation, it remains difficult to knock out individual gene expression. Recently, the possibility of silencing gene expression by RNAi in chick embryos has been reported. However, published studies show only discrete quantitative differences in the expression of the endogenous targeted genes and unclear morphological alterations. To elucidate whether the tools currently available are adequate to silence gene expression sufficiently to produce a clear and specific null-like mutant phenotype, we performed several experiments with different molecules that trigger RNAi: dsRNA, siRNA, and shRNA produced from a plasmid co-expressing green fluorescent protein as an internal marker. Focussing on fgf8 expression in the developing isthmus, we show that no morphological defects are observed, and that fgf8 expression is silenced neither in embryos microinjected with dsRNA nor in embryos microinjected and electroporated with a pool of siRNAs. Moreover, fgf8 expression was not significantly silenced in most isthmic cells transformed with a plasmid producing engineered shRNAs against fgf8. We also show that siRNA molecules do not spread significantly from cell to cell, as reported for invertebrates, suggesting the existence of molecular differences between model systems that may explain the different responses to RNAi. Although our results are basically in agreement with previously reported studies, we suggest, in contrast to them, that with currently available tools and techniques the number of cells in which fgf8 gene expression is decreased, if any, is not sufficient to generate a detectable mutant phenotype, thus making RNAi unsuitable as a routine method for functional gene analysis in chick embryos.
Abstract:
The computer simulation of reaction dynamics has nowadays reached a remarkable degree of accuracy. Triatomic elementary reactions are rigorously studied in great detail using a considerable variety of quantum dynamics computational tools available to the scientific community. In our contribution we compare the performance of two quantum scattering codes in the computation of reaction cross sections for a triatomic benchmark reaction, the gas-phase reaction Ne + H2+ → NeH+ + H. The computational codes are selected as representative of time-dependent (Real Wave Packet) and time-independent (ABC) methodologies. The main conclusion to be drawn from our study is that the two strategies are, to a great extent, not competing but rather complementary. While time-dependent calculations have the advantage of covering a wide energy range in a single simulation, time-independent approaches offer much more detailed information from each single-energy calculation. Further details, such as the calculation of reactivity at very low collision energies and the computational effort required to account for the Coriolis couplings, are analyzed in this paper.
Abstract:
This study examined how scenario analysis can be used in the study of new technologies. It was found that the suitability of scenario analysis is affected most by the level of technological change and the nature of the available information. The scenario method is well suited to the study of new technologies, especially in the case of radical innovations. The reason is the great uncertainty and complexity associated with them and the shift in the prevailing paradigm, which render many other futures-research methods unusable in such situations. In the empirical part of the study, the future of grid computing technology was examined by means of scenario analysis. Grid computing was seen as a potentially disruptive technology which, as a radical innovation, may shift computing from the current product-based purchasing of computing capacity to a service-based model. This would have a major impact on the entire current ICT industry, particularly through the exploitation of utility computing. The study examined developments up to the year 2010. Based on theory and existing knowledge, and drawing on strong expert insight, four possible environmental scenarios were constructed for grid computing. The scenarios showed that the commercial success of the technology still faces many challenges. In particular, trust and the creation of added value emerged as the most important factors driving the future of grid computing.
Abstract:
The objective of this study was to determine how to develop the company's current e-service system, an Internet-based electronic communication and information-sharing system, for managing its business-to-business customer relationships. A further objective was to draw up proposals for new e-service contract models. The theoretical part of the study sought to develop a framework model based on earlier research, the literature, and expert knowledge. In the empirical part, the objectives were pursued by interviewing the company's customers and personnel and by examining the current state and development of customer contacts. On the basis of this information, the needs, profiles, and readiness of e-service users, as well as the current attractiveness of the service, were examined. The source material for the theoretical part consisted of literature, articles, and statistics on customer relationship management and on the marketing, current state, and development of e-services, especially Internet and web services. In addition, literature on value network analysis, customer value, information technology, service quality, and customer satisfaction was reviewed. The empirical part is based on information gathered in interviews with the company's personnel and customers, on material previously collected by the company, and on data collected by Taloustutkimus. The study used a case method combining qualitative and quantitative research. The purpose of the case was to test the validity and usability of the model and to determine whether there are further factors that affect the value received by the customer. The qualitative material is based on thematic interviews with customers and company employees. The quantitative research is based on a Taloustutkimus survey and on data collected on the company's customer contacts. Based on the interviews, e-services were seen as useful and as very important in the future. E-services are regarded as one important channel, alongside traditional channels, for making the management of business-to-business customer relationships more effective. According to the results, the variation in customers' levels of knowledge, skills, need, and interest regarding the service indicates a clear need for e-service packages at different levels. The proposed solution derived from the results comprises the construction of four different e-service packages tailored to customers' different needs.