Abstract:
One of the key emphases of these three essays is to provide practical managerial insight. However, good practical insight can only be created by grounding it firmly in theoretical and empirical research. Practical experience-based understanding without theoretical grounding remains tacit and cannot be easily disseminated. Theoretical understanding without links to real life remains sterile. My studies aim to increase the understanding of how radical innovation can be generated at large established firms and how it can affect business performance, as most businesses pursue innovation with one prime objective: value creation. My studies focus on large established firms with sales revenue exceeding USD 1 billion. Such firms usually cannot rely on informal ways of management, as they tend to be multinational businesses operating with subsidiaries, offices, or production facilities in more than one country.
I. Internal and External Determinants of Corporate Venture Capital Investment
The goal of this chapter is to focus on CVC as one of the mechanisms available to established firms for sourcing new ideas that can be exploited. We explore the internal and external determinants under which established firms engage in CVC to source new knowledge through investment in startups. We attempt to make scholars and managers aware of the forces that influence CVC activity by providing findings and insights to facilitate the strategic management of CVC. There are research opportunities to further understand the CVC phenomenon. Why do companies engage in CVC? What motivates them to continue "playing the game" and keep their CVC investment status active? The study examines CVC investment activity and the importance of understanding the factors that lead a firm to engage in CVC. The main question is: How do established firms' CVC programs adapt to changing internal conditions and external environments?
Adaptation typically involves learning from exploratory endeavors, which enable companies to transform the ways they compete (Guth & Ginsberg, 1990). Our study extends the current stream of research on CVC. It aims to contribute to the literature by providing an extensive comparison of internal and external determinants leading to CVC investment activity. To our knowledge, this is the first study to examine the influence of internal and external determinants on CVC activity throughout specific expansion and contraction periods determined by structural breaks occurring between 1985 and 2008. Our econometric analysis indicates a strong and significant positive association between CVC activity and R&D, cash flow availability, and environmental financial market conditions, as well as a significant negative association between sales growth and the decision to engage in CVC. The analysis reveals that CVC investment is highly volatile, as demonstrated by dramatic fluctuations in CVC investment activity over the past decades. When analyzing the overall cyclical CVC period from 1985 to 2008, the results suggest that CVC activity follows a pattern influenced by financial factors such as the level of R&D, free cash flow, and lack of sales growth, together with external conditions of the economy, with the NASDAQ price index as the most significant variable influencing CVC during this period.
II. Contribution of CVC and its Interaction with R&D to Value Creation
The second essay takes into account the demands of corporate executives and shareholders regarding business performance and value-creation justifications for investments in innovation. Billions of dollars are invested in CVC and R&D. However, there is little evidence that CVC and its interaction with R&D create value. Firms operating in dynamic business sectors seek to innovate to create the value demanded by changing market conditions, consumer preferences, and competitive offerings.
Consequently, firms operating in such business sectors put a premium on finding new, sustainable, and competitive value propositions. CVC and R&D can help them in this challenge. Dushnitsky and Lenox (2006) presented evidence that CVC investment is associated with value creation. However, studies have shown that the most innovative firms do not necessarily benefit from innovation. For instance, Oyon (2007) indicated that between 1995 and 2005 the most innovative automotive companies did not obtain adequate rewards for shareholders. The interaction between CVC and R&D has generated much debate in the CVC literature. Some researchers see them as substitutes, suggesting that firms have to choose between CVC and R&D (Hellmann, 2002), while others expect them to be complementary (Chesbrough & Tucci, 2004). This study explores the interaction effect of CVC and R&D on value creation. The essay examines the impact of CVC and R&D on value creation over sixteen years across six business sectors and different geographical regions. Our findings suggest that the effect of CVC and its interaction with R&D on value creation is positive and significant. In dynamic business sectors technologies rapidly become obsolete; consequently, firms operating in such sectors need to continuously develop new sources of value creation (Eisenhardt & Martin, 2000; Qualls, Olshavsky, & Michaels, 1981). We conclude that in order to impact value creation, firms operating in business sectors such as Engineering & Business Services and Information Communication & Technology ought to consider CVC a vital element of their innovation strategy. Moreover, regarding the interaction effect, our findings suggest that R&D and CVC are complementary in creating value; hence, firms in certain business sectors can be better off supporting both R&D and CVC simultaneously to increase the probability of creating value.
III.
MCS and Organizational Structures for Radical Innovation
Incremental innovation is necessary for continuous improvement, but it does not provide a sustainable, permanent source of competitiveness (Cooper, 2003). On the other hand, radical innovation pursuing new technologies and new market frontiers can generate new platforms for growth, providing firms with competitive advantages and high economic margin rents (Duchesneau et al., 1979; Markides & Geroski, 2005; O'Connor & DeMartino, 2006; Utterback, 1994). Interestingly, not all companies distinguish between incremental and radical innovation, and, more importantly, firms that manage innovation through a one-size-fits-all process can almost guarantee a sub-optimization of certain systems and resources (Davila et al., 2006). Moreover, we conducted research on the utilization of MCS along with radical innovation and flexible organizational structures, as these have been associated with firm growth (Cooper, 2003; Davila & Foster, 2005, 2007; Markides & Geroski, 2005; O'Connor & DeMartino, 2006). Davila et al. (2009) identified research opportunities for innovation management and provided a list of pending issues: How do companies manage the process of radical and incremental innovation? What performance measures do companies use to manage radical ideas, and how do they select them? The fundamental objective of this paper is to address the following research question: What are the processes, MCS, and organizational structures for generating radical innovation? In recent years, research on innovation management has been conducted mainly at either the firm level (Birkinshaw, Hamel, & Mol, 2008a) or the project level, examining appropriate management techniques associated with high levels of uncertainty (Burgelman & Sayles, 1988; Dougherty & Heller, 1994; Jelinek & Schoonhoven, 1993; Kanter, North, Bernstein, & Williamson, 1990; Leifer et al., 2000).
Therefore, we embarked on a novel process-related research framework to observe the process stages, MCS, and organizational structures that can generate radical innovation. This article is based on a case study at Alcan Engineered Products, a division of a multinational provider of lightweight material solutions. Our observations suggest that incremental and radical innovation should be managed through different processes, MCS, and organizational structures, activated and adapted contingent on the type of innovation being pursued (i.e., incremental or radical). More importantly, we conclude that radical innovation can be generated in a systematic way through enablers such as processes, MCS, and organizational structures. This is in line with the findings of Jelinek and Schoonhoven (1993) and Davila et al. (2006; 2007), who show that innovative firms have institutionalized mechanisms and argue that radical innovation cannot occur in an organic environment where flexibility and consensus are the main managerial mechanisms; rather, radical innovation requires a clear organizational structure and formal MCS.
Abstract:
In recent years, the potential of type-2 fuzzy sets for managing high levels of uncertainty in the subjective knowledge of experts or in numerical information has been explored mainly in control and pattern classification systems. One of the main challenges in designing a type-2 fuzzy logic system is estimating the parameters of the type-2 membership function (T2MF) and its Footprint of Uncertainty (FOU) from imperfect and noisy datasets. This paper presents an automatic approach for learning and tuning Gaussian interval type-2 membership functions (IT2MFs), with application to multi-dimensional pattern classification problems. T2MFs and their FOUs are tuned according to the uncertainties in the training dataset by a combination of genetic algorithm (GA) and cross-validation techniques. In our GA-based approach, the chromosome has fewer genes than in other GA methods, and chromosome initialization is more precise. The proposed approach applies the interval type-2 fuzzy logic system (IT2FLS) to the problem of nodule classification in a lung Computer Aided Detection (CAD) system. The designed IT2FLS is compared with its type-1 fuzzy logic system (T1FLS) counterpart. The results demonstrate that the IT2FLS outperforms the T1FLS by more than 30% in terms of classification accuracy.
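A Gaussian interval type-2 membership function of the kind tuned here can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's GA-tuned implementation; it assumes the uncertainty is placed on the standard deviation, which is one common way of defining the FOU:

```python
import numpy as np

def gaussian_it2mf(x, mean, sigma_lo, sigma_hi):
    """Gaussian interval type-2 MF: fixed mean, standard deviation
    uncertain within [sigma_lo, sigma_hi]. Returns lower and upper
    membership grades; the band between them is the FOU."""
    lower = np.exp(-0.5 * ((x - mean) / sigma_lo) ** 2)  # narrow Gaussian
    upper = np.exp(-0.5 * ((x - mean) / sigma_hi) ** 2)  # wide Gaussian
    return lower, upper

x = np.linspace(-4.0, 4.0, 201)
lower, upper = gaussian_it2mf(x, mean=0.0, sigma_lo=0.5, sigma_hi=1.0)
fou_width = (upper - lower).max()  # widest point of the FOU band
```

In a GA-based tuning scheme such as the one described, a chromosome would encode the (mean, sigma_lo, sigma_hi) triple per input feature, with cross-validated classification accuracy as the fitness.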
Reorganization of a deeply incised drainage: role of deformation, sedimentation and groundwater flow
Abstract:
Deeply incised drainage networks are thought to be robust and not easily modified, and are commonly used as passive markers of horizontal strain. Yet reorganizations (rearrangements) appear in the geologic record. We provide field evidence of the reorganization of a Miocene drainage network in response to strike-slip and vertical displacements in Guatemala. The drainage was deeply incised into a 50-km-wide orogen located along the North America-Caribbean plate boundary. It rearranged twice, first during the Late Miocene in response to transpressional uplift along the Polochic fault, and again in the Quaternary in response to transtensional uplift along secondary faults. The pattern of reorganization resembles that produced by the tectonic defeat of rivers that cross growing tectonic structures. Compilation of remote sensing data, field mapping, sediment provenance study, grain-size analysis and 40Ar/39Ar dating of paleovalleys and their fill reveals that the classic mechanisms of river diversion, such as river avulsion over bedrock or capture driven by surface runoff, are not sufficient to produce the observed diversions. The sites of diversion coincide spatially with limestone belts and reactivated fault zones, suggesting that solution-triggered or deformation-triggered permeability helped breach the interfluves. The diversions are also related temporally and spatially to the accumulation of sediment fills in the valleys, upstream of the rising structures. We infer that the breaching of the interfluves was achieved by headward erosion along tributaries fed by groundwater flowing from the valleys soon to be captured. Fault zones and limestone belts provided the pathways, and the aquifers occupying the valley fills provided the head pressure that enhanced groundwater circulation. The defeat of rivers crossing the rising structures results essentially from tectonically enhanced activation of groundwater flow between catchments.
Abstract:
Closely related species may be very difficult to distinguish morphologically, yet sometimes morphology is the only reasonable option for taxonomic classification. Here we present learning vector quantization artificial neural networks as a powerful tool to classify specimens on the basis of geometric morphometric shape measurements. As an example, we trained a neural network to distinguish between field and root voles from Procrustes-transformed landmark coordinates on the dorsal side of the skull, which is so similar in these two species that the human eye cannot make the distinction. Properly trained neural networks misclassified only 3% of specimens. We therefore conclude that the capacity of learning vector quantization neural networks to analyse spatial coordinates makes them a powerful addition to the range of pattern recognition procedures available for employing the information content of geometric morphometrics.
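The core of LVQ training can be sketched compactly. The following is a minimal LVQ1 variant with one prototype per class, run on hypothetical 2-D scores standing in for the two vole species; the study's networks and Procrustes preprocessing are of course more elaborate:

```python
import numpy as np

def train_lvq1(X, y, n_epochs=20, lr=0.1, seed=0):
    """Minimal LVQ1: one prototype per class, initialized at the class
    mean, then nudged toward same-class samples and away from samples
    of other classes."""
    rng = np.random.default_rng(seed)
    classes = np.unique(y)
    protos = np.array([X[y == c].mean(axis=0) for c in classes])
    for _ in range(n_epochs):
        for i in rng.permutation(len(X)):
            dists = np.linalg.norm(protos - X[i], axis=1)
            j = int(np.argmin(dists))                 # winning prototype
            sign = 1.0 if classes[j] == y[i] else -1.0
            protos[j] += sign * lr * (X[i] - protos[j])
    return classes, protos

def predict_lvq(classes, protos, X):
    """Assign each sample the class of its nearest prototype."""
    d = np.linalg.norm(X[:, None, :] - protos[None, :, :], axis=2)
    return classes[np.argmin(d, axis=1)]

# Hypothetical 2-D shape scores for two well-separated "species".
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (30, 2)), rng.normal(2.0, 0.3, (30, 2))])
y = np.array([0] * 30 + [1] * 30)
classes, protos = train_lvq1(X, y)
```

Practical LVQ systems typically use several prototypes per class and a decaying learning rate; the codebook-of-prototypes idea is the same.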
Abstract:
Age-related cognitive impairments were studied in rats kept in semi-enriched conditions during their whole life and tested during ontogeny and adult life in various classical spatial tasks. In addition, the effect of intrahippocampal grafts of fetal septal-diagonal band tissue, rich in cholinergic neurons, was studied in some of these subjects. The rats received bilateral cell suspensions when aged 23-24 months. Starting 4 weeks after grafting, they were trained for 5 weeks in an 8-arm maze made of connected plexiglass tunnels. No age-related impairment was detected during the first eight trials, when the maze shape was that of a classical radial maze in which the rats had already been trained when young. The older rats were impaired when the task was made more difficult by rendering two arms parallel to each other. They developed a marked neglect of one of the parallel tunnels, resulting in a large number of errors before completion of the task. In addition, the old rats developed a systematic response pattern of visits to adjacent arms in sequence, which was not observed in the younger subjects. None of these behaviours was observed in the old rats with a septal transplant. Sixteen weeks after grafting, another experiment was conducted in a homing hole-board task. Rats were allowed to escape from a large circular arena through one hole out of many and to reach home via a flexible tube under the table. The escape hole was at a fixed position according to distant room cues, and olfactory cues were made irrelevant by rotating the table between trials. An additional cue was placed at the escape position. No age-related difference in escape was observed during training. During a probe trial with no hole connected and no proximal cue present, the old untreated rats were less clearly focussed on the training sector than were either the younger or the grafted old subjects.
Taken together, these experiments indicate that enriched housing conditions and spatial training during adult life do not protect against all age-related deterioration in spatial ability. However, it might be that the considerable improvement observed in the grafted subjects results from an interaction between the graft treatment and the housing conditions.
Abstract:
The objective of this study was to assess breeding and dispersal patterns of both males and females in a monogyne (a single queen per colony) population of ants. Monogyny is commonly associated with extensive nuptial flights, presumably leading to considerable gene flow over large areas. Contrary to these expectations, we found evidence of both inbreeding and sex-biased gene flow in a monogyne population of Formica exsecta. We found a significant degree of population subdivision at a local scale (within islands) for queens (females heading established colonies) and workers, but not for colony fathers (the males mated to the colony queens). However, we found little evidence of population subdivision at a larger scale (among islands). More conclusive support for sex-biased gene flow comes from the analysis of isolation by distance on the largest island, and from assignment tests revealing differences in female and male philopatry. The genetic similarity between pairs of queens decreased significantly with increasing geographical distance, demonstrating limited dispersal and isolation by distance in queens. By contrast, we found no such pattern for colony fathers. Furthermore, a significantly greater fraction of colony queens than of colony fathers were assigned as having originated from the population of residence. Inbreeding coefficients were significantly positive for workers, but not for mother queens. The queen-male relatedness coefficient of 0.23 (regression relatedness) indicates that mating occurs between fairly close relatives. These results suggest that some monogyne species of ants have complex dispersal and mating systems that can result in genetic isolation by distance over small geographical scales. More generally, this study also highlights the importance of identifying the relevant scale in analyses of population structure and dispersal.
Abstract:
1. Identifying the boundary of a species' niche from observational and environmental data is a common problem in ecology and conservation biology, and a variety of techniques have been developed or applied to model niches and predict distributions. Here, we examine the performance of some pattern-recognition methods as ecological niche models (ENMs). In particular, one-class pattern recognition is a flexible and seldom-used methodology for modelling ecological niches and distributions from presence-only data. The development of one-class methods that perform comparably to two-class methods (for presence/absence data) would remove modelling decisions about sampling pseudo-absences or background data points when absence points are unavailable. 2. We studied nine methods for one-class classification and seven methods for two-class classification (five common to both), all primarily used in pattern recognition and therefore not common in species distribution and ecological niche modelling, across a set of 106 mountain plant species for which presence-absence data were available. We assessed accuracy using standard metrics and compared trade-offs in omission and commission errors between classification groups, as well as the effects of prevalence and spatial autocorrelation on accuracy. 3. One-class models fit to presence-only data were comparable to two-class models fit to presence-absence data when performance was evaluated with a measure weighting omission and commission errors equally. One-class models were superior for reducing omission errors (i.e. yielding higher sensitivity), and two-class models were superior for reducing commission errors (i.e. yielding higher specificity). For these methods, spatial autocorrelation was only influential when prevalence was low. 4.
These results differ from previous efforts to evaluate alternative modelling approaches for building ENMs and are particularly noteworthy because the data are from exhaustively sampled populations, minimizing false absence records. Accurate, transferable models of species' ecological niches and distributions are needed to advance ecological research and are crucial for effective environmental planning and conservation; the pattern-recognition approaches studied here show good potential for future modelling studies. This study also provides an introduction to promising methods for ecological modelling inherited from the pattern-recognition discipline.
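The one-class versus two-class distinction can be illustrated with standard pattern-recognition tools. The sketch below uses scikit-learn on invented "environmental" data; the methods and data are illustrative only, not the sixteen classifiers or the 106-species dataset of the study:

```python
import numpy as np
from sklearn.svm import OneClassSVM, SVC

rng = np.random.default_rng(0)
# Hypothetical environmental space (e.g. temperature, precipitation):
# presences cluster around favourable conditions, absences scatter widely.
presence = rng.normal(loc=[15.0, 800.0], scale=[1.0, 50.0], size=(200, 2))
absence = rng.uniform(low=[5.0, 400.0], high=[25.0, 1200.0], size=(200, 2))

X = np.vstack([presence, absence])
mu, sd = X.mean(axis=0), X.std(axis=0)
z = lambda a: (a - mu) / sd  # rescale: the two variables differ in units

# One-class model: fitted to presences only, no absence points needed.
one_class = OneClassSVM(nu=0.05, gamma="scale").fit(z(presence))
sensitivity = (one_class.predict(z(presence)) == 1).mean()

# Two-class model: requires absences (or pseudo-absences) as well.
y = np.r_[np.ones(200), np.zeros(200)]
two_class = SVC(gamma="scale").fit(z(X), y)
accuracy = (two_class.predict(z(X)) == y).mean()
```

The trade-off the abstract reports shows up naturally here: the one-class model envelopes the presences (high sensitivity), while the two-class boundary also exploits where the species is absent.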
Abstract:
OBJECTIVES: To evaluate the renal function outcome in children with unilateral hydronephrosis and urinary flow impairment at the pelviureteral junction with respect to the therapeutic strategy. METHODS: We retrospectively selected 45 children with iodine-123-hippuran renography performed at diagnosis and after 3 or more years of follow-up. All children had bilateral nonobstructive pattern findings on diuretic renography at follow-up. Eleven children were treated conservatively, and 34 underwent unilateral pyeloplasty. Split and individual renal function, measured by an accumulation index, was computed from background-corrected renograms for the affected and contralateral kidneys at diagnosis and at the follow-up examination. RESULTS: Of the 11 children treated conservatively, 9 had normal bilateral function at diagnosis, and all had normal function at follow-up. Of the 34 operated kidneys, 12 (38%) had initially normal function that remained normal at the follow-up examination, and 22 had impaired function that had normalized at the follow-up examination in 15 (68%). The function of the contralateral kidney was increased in 5 of the 8 children with persistently abnormal affected kidneys. Pyeloplasty was performed before the age of 1 year in 23 children (68%) and later in 11 children (32%). The function of the affected kidneys increased in both groups, but normalization occurred only in the younger children. CONCLUSIONS: Of the children selected for conservative treatment, 82% had normal bilateral renal function at diagnosis, and function was normal in all at the follow-up examination. Of the children treated surgically, 65% had initially impaired function of the affected kidney, which improved in 87% after pyeloplasty. Normalization of function was observed only in children who were younger than 1 year old at surgery. Persistently low function of the affected kidney was compensated for by the contralateral one regardless of age at surgery.
Abstract:
Radioactive soil-contamination mapping and risk assessment is a vital issue for decision makers. Traditional approaches for mapping the spatial concentration of radionuclides employ various regression-based models, which usually provide a single-value prediction realization accompanied (in some cases) by estimation error. Such approaches do not provide the capability for rigorous uncertainty quantification or probabilistic mapping. Machine learning is a recent and fast-developing approach based on learning patterns and information from data. Artificial neural networks for prediction mapping have been especially powerful in combination with spatial statistics. A data-driven approach provides the opportunity to integrate additional relevant information about spatial phenomena into a prediction model for more accurate spatial estimates and associated uncertainty. Machine-learning algorithms can also be used for a wider spectrum of problems than before: classification, probability density estimation, and so forth. Stochastic simulations are used to model spatial variability and uncertainty. Unlike regression models, they provide multiple realizations of a particular spatial pattern that allow uncertainty and risk quantification. This paper reviews the most recent methods of spatial data analysis, prediction, and risk mapping, based on machine learning and stochastic simulations in comparison with more traditional regression models. The radioactive fallout from the Chernobyl Nuclear Power Plant accident is used to illustrate the application of the models for prediction and classification problems. This fallout is a unique case study that provides the challenging task of analyzing huge amounts of data ('hard' direct measurements, as well as supplementary information and expert estimates) and solving particular decision-oriented problems.
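The contrast drawn here between a single regression prediction and multiple simulated realizations can be sketched in a few lines. The following is a hypothetical 1-D unconditional Gaussian simulation; real applications condition the realizations on measurements and work over 2-D maps:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)                 # 1-D "transect" coordinates
trend = 1.0 + 0.2 * x                          # regression-style mean surface

# Exponential covariance model: nearby points are strongly correlated.
cov = np.exp(-np.abs(x[:, None] - x[None, :]) / 2.0)
L = np.linalg.cholesky(cov + 1e-10 * np.eye(len(x)))

# 100 equally probable realizations of the spatial field; their spread,
# not a single predicted surface, is what supports risk quantification.
realizations = trend + (L @ rng.standard_normal((len(x), 100))).T
pointwise_sd = realizations.std(axis=0)
exceedance = (realizations > 2.0).mean(axis=0)  # probability of exceeding a threshold
```

A regression model would return only `trend`; the ensemble additionally yields an uncertainty band (`pointwise_sd`) and decision-oriented probability maps (`exceedance`).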
Abstract:
Context: Ovarian tumor (OT) typing is a competency expected of pathologists, with significant clinical implications. OTs, however, come in numerous types, some rather rare, with the consequence that some departments have few opportunities for practice. Aim: Our aim was to design a tool for pathologists to train in typing less common OTs. Method and Results: Representative slides of 20 less common OTs were scanned (NanoZoomer, Hamamatsu®), and the diagnostic algorithm proposed by Young and Scully (Young RH and Scully RE, Seminars in Diagnostic Pathology 2001, 18: 161-235) was applied to each case, to include: recognition of morphological pattern(s); shortlisting of differential diagnoses; proposition of relevant immunohistochemical markers. The next steps of this project will be: evaluation of the tool in several postgraduate training centers in Europe and Québec; improvement of its design based on the evaluation results; and diffusion to a larger public. Discussion: In clinical medicine, solving many cases is recognized as being of utmost importance for a novice to become an expert. This project relies on virtual slide technology to provide pathologists with a learning tool aimed at increasing their skills in OT typing. After due evaluation, this model might be extended to other uncommon tumors.
Abstract:
This thesis is devoted to the analysis, modeling and visualization of spatially referenced environmental data using machine learning algorithms. Machine learning can be broadly considered a subfield of artificial intelligence particularly concerned with developing techniques and algorithms that allow a machine to learn from data. In this thesis, machine learning algorithms are adapted for application to environmental data and spatial prediction. Why machine learning? Because most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient for modeling. They can solve problems of classification, regression and probability density modeling in high-dimensional spaces composed of spatially referenced informative variables ("geo-features") in addition to geographic coordinates. Moreover, they are well suited for implementation as decision-support tools for environmental questions ranging from pattern recognition to modeling and prediction, including automatic mapping. Their efficiency is comparable to that of geostatistical models in the space of geographic coordinates, but they are indispensable for high-dimensional data including geo-features. The most important and popular machine learning algorithms are presented theoretically and implemented as software tools for the environmental sciences.
The main algorithms described are the multilayer perceptron (MLP), the best-known algorithm in artificial intelligence; general regression neural networks (GRNN); probabilistic neural networks (PNN); self-organizing maps (SOM); Gaussian mixture models (GMM); radial basis function networks (RBF); and mixture density networks (MDN). This range of algorithms covers varied tasks such as classification, regression and probability density estimation. Exploratory data analysis (EDA) is the first step of any data analysis. In this thesis, the concepts of exploratory spatial data analysis (ESDA) are treated both through the traditional geostatistical approach of experimental variography and through the principles of machine learning. Experimental variography, which studies the relationships between pairs of points, is a basic tool for the geostatistical analysis of anisotropic spatial correlations and makes it possible to detect the presence of spatial patterns describable by two-point statistics. The machine learning approach to ESDA is presented through the application of the k-nearest-neighbors method, which is very simple and has excellent interpretation and visualization properties. An important part of the thesis deals with topical subjects such as the automatic mapping of spatial data. The general regression neural network is proposed to solve this task efficiently.
The performance of the GRNN is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, for which the GRNN significantly outperformed all other methods, particularly in emergency situations. The thesis comprises four chapters: theory, applications, software tools, and guided examples. An important part of the work is a collection of software tools, Machine Learning Office. This software collection has been developed over the past 15 years and has been used for teaching numerous courses, including international workshops in China, France, Italy, Ireland and Switzerland, as well as in fundamental and applied research projects. The case studies considered cover a wide spectrum of real low- and high-dimensional geo-environmental problems, such as air, soil and water pollution by radioactive products and heavy metals; the classification of soil types and hydrogeological units; uncertainty mapping for decision support; and natural hazard (landslide, avalanche) risk estimation. Complementary tools for exploratory data analysis and visualization have also been developed, with care taken to provide a user-friendly and easy-to-use interface.
Machine Learning for geospatial data: algorithms, software tools and case studies
Abstract
The thesis is devoted to the analysis, modeling and visualisation of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence mainly concerned with the development of techniques and algorithms that allow computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning?
In a few words, most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can find solutions to classification, regression, and probability density modeling problems in high-dimensional geo-feature spaces composed of geographical space and additional relevant spatially referenced features. They are well suited to be implemented as predictive engines in decision support systems, for purposes of environmental data mining including pattern recognition, modeling and prediction, as well as automatic data mapping. Their efficiency is competitive with geostatistical models in low-dimensional geographical spaces, but they are indispensable in high-dimensional geo-feature spaces. The most important and popular machine learning algorithms and models of interest to geo- and environmental sciences are presented in detail, from theoretical description of the concepts to software implementation. The main algorithms and models considered are the following: the multilayer perceptron (a workhorse of machine learning), general regression neural networks, probabilistic neural networks, self-organising (Kohonen) maps, Gaussian mixture models, radial basis function networks, and mixture density networks. This set of models covers machine learning tasks such as classification, regression, and density estimation. Exploratory data analysis (EDA) is the initial and a very important part of data analysis. In this thesis, the concepts of exploratory spatial data analysis (ESDA) are considered using both a traditional geostatistical approach, namely experimental variography, and machine learning. Experimental variography is a basic tool for the geostatistical analysis of anisotropic spatial correlations which helps to detect the presence of spatial patterns, at least those described by two-point statistics.
A machine learning approach to ESDA is presented via the k-nearest neighbours (k-NN) method, which is simple and has very good interpretation and visualization properties. An important part of the thesis deals with a currently hot topic: automatic mapping of geospatial data. The general regression neural network (GRNN) is proposed as an efficient model for this task. The performance of the GRNN model is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, where it significantly outperformed all other approaches, especially under emergency conditions. The thesis consists of four chapters with the following structure: theory, applications, software tools, and how-to-do-it examples. An important part of the work is a collection of software tools, Machine Learning Office. These tools were developed over the last 15 years and have been used both for many teaching courses, including international workshops in China, France, Italy, Ireland, and Switzerland, and for carrying out fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil, and water pollution by radionuclides and heavy metals, classification of soil types and hydro-geological units, decision-oriented mapping with uncertainties, and natural hazard (landslide, avalanche) assessment and susceptibility mapping. Complementary tools for exploratory data analysis and visualisation were developed as well. The software is user friendly and easy to use.
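The GRNN proposed for automatic mapping is, at its core, a Nadaraya-Watson kernel regression with a single smoothing parameter sigma: y_hat(x) = sum_i y_i K(x, x_i) / sum_i K(x, x_i). A minimal NumPy sketch under that standard formulation (names and test surface are illustrative; this is not the thesis's Machine Learning Office implementation):

```python
import numpy as np

def grnn_predict(train_x, train_y, query_x, sigma):
    """GRNN prediction: a Gaussian-kernel-weighted average of training
    targets, with one smoothing parameter sigma to tune."""
    # Squared distances from every query point to every training point.
    d2 = ((query_x[:, None, :] - train_x[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))    # Gaussian kernel weights
    return (w @ train_y) / w.sum(axis=1)    # normalised weighted average

# Demo: interpolate a smooth surface from scattered 2-D samples.
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(300, 2))
y = np.cos(2 * np.pi * X[:, 0]) * np.sin(2 * np.pi * X[:, 1])
Xq = rng.uniform(0.1, 0.9, size=(50, 2))
y_true = np.cos(2 * np.pi * Xq[:, 0]) * np.sin(2 * np.pi * Xq[:, 1])
y_hat = grnn_predict(X, y, Xq, sigma=0.05)
rmse = float(np.sqrt(np.mean((y_hat - y_true) ** 2)))
```

The single hyperparameter sigma is what makes GRNN attractive for automatic mapping: it can be selected by cross-validation without manual structural analysis, which matters in emergency conditions.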
Resumo:
BACKGROUND: The mechanism behind early graft failure after right ventricular outflow tract (RVOT) reconstruction is not fully understood. Our aim was to establish a three-dimensional computational fluid dynamics (CFD) model of the RVOT to investigate the hemodynamic conditions that may trigger the development of intimal hyperplasia and arteriosclerosis. METHODS: Pressure, flow, and diameter at the RVOT, pulmonary artery (PA), PA bifurcation, and left and right PAs were measured in 10 normal pigs with a mean weight of 24.8 ± 0.78 kg. Data obtained from the experimental scenario were used for CFD simulation of the pressure, flow, and shear stress profiles from the RVOT to the left and right PAs. RESULTS: Using the experimental data, a CFD model was obtained for 2.0- and 2.5-L/min pulsatile inflow profiles. For both inflow profiles, the time- and space-averaged shear stress fell in the low range of 0-6.0 Pa at the pulmonary trunk, its bifurcation, and the openings of both PAs. These low-shear stress areas were accompanied by high-pressure regions of 14.0-20.0 mm Hg (1866.2-2666 Pa). Flow analysis revealed turbulent flow at the PA bifurcation and the ostia of both PAs. CONCLUSIONS: The identified local low shear stress, high pressure, and turbulent flow correspond to a well-defined trigger pattern for the development of intimal hyperplasia and arteriosclerosis. As such, this real-time three-dimensional CFD model may in the future serve as a tool for planning RVOT reconstruction, analysing it, and predicting its outcome.
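As a rough order-of-magnitude check on the shear-stress quantity reported above (not the study's pulsatile CFD model): for steady Poiseuille flow in a straight vessel, wall shear stress follows from the flow rate as tau_w = 4 * mu * Q / (pi * R^3). The viscosity and radius below are assumed illustrative values, not taken from the study:

```python
import math

# Poiseuille wall shear stress: tau_w = 4 * mu * Q / (pi * R**3).
# mu and R are assumed illustrative values (blood, porcine-PA scale),
# not measurements from the study.
mu = 3.5e-3            # dynamic viscosity of blood, Pa*s (assumed)
R = 0.008              # vessel radius, m (assumed)
Q = 2.0 / 60 / 1000    # flow rate: 2.0 L/min converted to m^3/s

tau_w = 4 * mu * Q / (math.pi * R ** 3)   # wall shear stress, Pa
```

Under these assumptions the steady-flow estimate lands well inside the low-shear 0-6.0 Pa band the study reports; the CFD model is still needed to resolve the pulsatile, spatially varying stresses at the bifurcation and ostia.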
Resumo:
BACKGROUND: Cellular processes underlying memory formation are evolutionary conserved, but natural variation in memory dynamics between animal species or populations is common. The genetic basis of this fascinating phenomenon is poorly understood. Closely related species of Nasonia parasitic wasps differ in long-term memory (LTM) formation: N. vitripennis will form transcription-dependent LTM after a single conditioning trial, whereas the closely-related species N. giraulti will not. Genes that were differentially expressed (DE) after conditioning in N. vitripennis, but not in N. giraulti, were identified as candidate genes that may regulate LTM formation. RESULTS: RNA was collected from heads of both species before and immediately, 4 or 24 hours after conditioning, with 3 replicates per time point. It was sequenced strand-specifically, which allows distinguishing sense from antisense transcripts and improves the quality of expression analyses. We determined conditioning-induced DE compared to naïve controls for both species. These expression patterns were then analysed with GO enrichment analyses for each species and time point, which demonstrated an enrichment of signalling-related genes immediately after conditioning in N. vitripennis only. Analyses of known LTM genes and genes with an opposing expression pattern between the two species revealed additional candidate genes for the difference in LTM formation. These include genes from various signalling cascades, including several members of the Ras and PI3 kinase signalling pathways, and glutamate receptors. Interestingly, several other known LTM genes were exclusively differentially expressed in N. giraulti, which may indicate an LTM-inhibitory mechanism. Among the DE transcripts were also antisense transcripts. Furthermore, antisense transcripts aligning to a number of known memory genes were detected, which may have a role in regulating these genes. 
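The GO enrichment step described above is commonly implemented as a per-term hypergeometric (one-sided Fisher) test: with N annotated background genes, K of which carry a given term, and a DE set of size n containing k term genes, the enrichment p-value is P(X >= k). A minimal SciPy sketch (the counts below are illustrative, not from this study):

```python
from scipy.stats import hypergeom

def go_enrichment_pvalue(N, K, n, k):
    """P(X >= k) for X ~ Hypergeom(N, K, n): the chance of drawing at
    least k term-annotated genes in a DE set of size n by chance."""
    # sf(k - 1) gives P(X >= k); scipy's argument order is (M, n, N).
    return float(hypergeom.sf(k - 1, N, K, n))

# Illustrative numbers: 10000 background genes, 200 annotated with a
# signalling-related GO term, 300 DE genes of which 20 carry the term
# (expected by chance: 300 * 200 / 10000 = 6).
p = go_enrichment_pvalue(N=10000, K=200, n=300, k=20)
```

In practice such p-values are computed for every GO term and then corrected for multiple testing before a category like "signalling" is called enriched at a given time point.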
CONCLUSION: This study is the first to describe and compare expression patterns of both protein-coding and antisense transcripts, at different time points after conditioning, of two closely related animal species that differ in LTM formation. Several candidate genes that may regulate differences in LTM have been identified. This transcriptome analysis is a valuable resource for future in-depth studies to elucidate the role of candidate genes and antisense transcription in natural variation in LTM formation.