44 resultados para Learning from Examples


Relevância:

80.00%

Publicador:

Resumo:

In the past, research in ontology learning from text has mainly focused on entity recognition, taxonomy induction and relation extraction. In this work we approach a challenging research issue: detecting semantic frames from texts and using them to encode web ontologies. We exploit a new-generation Natural Language Processing technology for frame detection, and we enrich the acquired frames with argument restrictions provided by a super-sense tagger, as well as with domain specializations. The results are encoded according to a Linguistic MetaModel, which allows a complete translation of lexical resources and of the data acquired from text, enabling custom transformations of the enriched frames into modular ontology components.
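To make the final step concrete, the sketch below shows one plausible shape of the transformation from an enriched frame to a modular ontology component. The frame name, role names and super-sense tags are invented for illustration, and the Turtle-like output is only a minimal stand-in for the Linguistic MetaModel encoding, not the system's actual format.

```python
# Hypothetical enriched frame: roles carry super-sense argument restrictions.
frame = {
    "name": "Commerce_buy",
    "roles": {"Buyer": "noun.person", "Goods": "noun.artifact"},
}

def frame_to_owl(frame):
    """Emit Turtle-like triples for one frame as a modular ontology component."""
    lines = [f':{frame["name"]} a owl:Class .']
    for role, supersense in frame["roles"].items():
        prop = f':{frame["name"]}_{role}'
        lines.append(f"{prop} a owl:ObjectProperty ;")
        lines.append(f"    rdfs:domain :{frame['name']} ;")
        # The super-sense tag becomes a range restriction on the role.
        lines.append(f"    rdfs:range :{supersense.replace('.', '_')} .")
    return "\n".join(lines)

print(frame_to_owl(frame))
```

Each frame becomes an OWL class, and each enriched role becomes a property whose range encodes the super-sense restriction.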

Relevância:

80.00%

Publicador:

Resumo:

One of the key emphases of these three essays is to provide practical managerial insight. However, good practical insight can only be created by grounding it firmly in theoretical and empirical research. Practical, experience-based understanding without theoretical grounding remains tacit and cannot be easily disseminated; theoretical understanding without links to real life remains sterile. My studies aim to increase our understanding of how radical innovation can be generated at large established firms and how it can affect business performance, as most businesses pursue innovation with one prime objective: value creation. My studies focus on large established firms with sales revenue exceeding USD 1 billion. Such firms usually cannot rely on informal ways of management, as they tend to be multinational businesses operating with subsidiaries, offices, or production facilities in more than one country.

I. Internal and External Determinants of Corporate Venture Capital Investment. The goal of this chapter is to focus on corporate venture capital (CVC) as one of the mechanisms available to established firms for sourcing new ideas that can be exploited. We explore the internal and external determinants under which established firms engage in CVC to source new knowledge through investment in startups. We attempt to make scholars and managers aware of the forces that influence CVC activity by providing findings and insights to facilitate the strategic management of CVC. There are research opportunities to further understand the CVC phenomenon. Why do companies engage in CVC? What motivates them to continue "playing the game" and keep their CVC investment status active? The study examines CVC investment activity and the importance of understanding the factors that influence a firm's decision to engage in CVC. The main question is: how do established firms' CVC programs adapt to changing internal conditions and external environments?
Adaptation typically involves learning from exploratory endeavors, which enable companies to transform the ways they compete (Guth & Ginsberg, 1990). Our study extends the current stream of research on CVC. It aims to contribute to the literature by providing an extensive comparison of the internal and external determinants leading to CVC investment activity. To our knowledge, this is the first study to examine the influence of internal and external determinants on CVC activity throughout specific expansion and contraction periods determined by structural breaks occurring between 1985 and 2008. Our econometric analysis indicates a strong and significant positive association between CVC activity and R&D, cash flow availability and environmental financial market conditions, as well as a significant negative association between sales growth and the decision to engage in CVC. The analysis reveals that CVC investment is highly volatile, as demonstrated by dramatic fluctuations in CVC investment activity over the past decades. When analyzing the overall cyclical CVC period from 1985 to 2008, the results of our study suggest that CVC activity follows a pattern influenced by financial factors such as the level of R&D, free cash flow, and lack of sales growth, as well as by external economic conditions, with the NASDAQ price index as the most significant variable influencing CVC during this period.

II. Contribution of CVC and its Interaction with R&D to Value Creation. The second essay takes into account the demands of corporate executives and shareholders regarding business performance and value-creation justifications for investments in innovation. Billions of dollars are invested in CVC and R&D; however, there is little evidence that CVC and its interaction with R&D create value. Firms operating in dynamic business sectors seek to innovate to create the value demanded by changing market conditions, consumer preferences, and competitive offerings.
Consequently, firms operating in such business sectors put a premium on finding new, sustainable and competitive value propositions. CVC and R&D can help them in this challenge. Dushnitsky and Lenox (2006) presented evidence that CVC investment is associated with value creation. However, studies have shown that the most innovative firms do not necessarily benefit from innovation. For instance, Oyon (2007) indicated that between 1995 and 2005 the most innovative automotive companies did not obtain adequate rewards for shareholders. The interaction between CVC and R&D has generated much debate in the CVC literature. Some researchers see them as substitutes, suggesting that firms have to choose between CVC and R&D (Hellmann, 2002), while others expect them to be complementary (Chesbrough & Tucci, 2004). This study explores the effect that the interaction of CVC and R&D has on value creation. The essay examines the impact of CVC and R&D on value creation over sixteen years, across six business sectors and different geographical regions. Our findings suggest that the effect of CVC and its interaction with R&D on value creation is positive and significant. In dynamic business sectors technologies rapidly become obsolete; consequently, firms operating in such sectors need to continuously develop new sources of value creation (Eisenhardt & Martin, 2000; Qualls, Olshavsky, & Michaels, 1981). We conclude that in order to affect value creation, firms operating in business sectors such as Engineering & Business Services and Information & Communication Technology ought to consider CVC a vital element of their innovation strategy. Moreover, regarding the CVC-R&D interaction effect, our findings suggest that R&D and CVC are complementary in creating value; hence, firms in certain business sectors can be better off supporting both R&D and CVC simultaneously to increase the probability of creating value.

III.
MCS and Organizational Structures for Radical Innovation. Incremental innovation is necessary for continuous improvement, but it does not provide a sustainable, permanent source of competitiveness (Cooper, 2003). On the other hand, radical innovation, which pursues new technologies and new market frontiers, can generate new platforms for growth, providing firms with competitive advantages and high economic rents (Duchesneau et al., 1979; Markides & Geroski, 2005; O'Connor & DeMartino, 2006; Utterback, 1994). Interestingly, not all companies distinguish between incremental and radical innovation, and, more importantly, firms that manage innovation through a one-size-fits-all process can almost guarantee the sub-optimization of certain systems and resources (Davila et al., 2006). Moreover, we conducted research on the use of management control systems (MCS) along with radical innovation and flexible organizational structures, as these have been associated with firm growth (Cooper, 2003; Davila & Foster, 2005, 2007; Markides & Geroski, 2005; O'Connor & DeMartino, 2006). Davila et al. (2009) identified research opportunities for innovation management and provided a list of pending issues: How do companies manage the process of radical and incremental innovation? What performance measures do companies use to manage radical ideas, and how do they select them? The fundamental objective of this paper is to address the following research question: What are the processes, MCS, and organizational structures for generating radical innovation? In recent years, research on innovation management has been conducted mainly either at the firm level (Birkinshaw, Hamel, & Mol, 2008a) or at the project level, examining the management techniques appropriate to high levels of uncertainty (Burgelman & Sayles, 1988; Dougherty & Heller, 1994; Jelinek & Schoonhoven, 1993; Kanter, North, Bernstein, & Williamson, 1990; Leifer et al., 2000).
Therefore, we embarked on a novel process-related research framework to observe the process stages, MCS, and organizational structures that can generate radical innovation. This article is based on a case study at Alcan Engineered Products, a division of a multinational provider of lightweight material solutions. Our observations suggest that incremental and radical innovation should be managed through different processes, MCS and organizational structures, which ought to be activated and adapted contingent on the type of innovation being pursued (i.e., incremental or radical). More importantly, we conclude that radical innovation can be generated in a systematic way through enablers such as processes, MCS, and organizational structures. This is in line with the findings of Jelinek and Schoonhoven (1993) and Davila et al. (2006; 2007), who show that innovative firms have institutionalized mechanisms, arguing that radical innovation cannot occur in an organic environment where flexibility and consensus are the main managerial mechanisms. Rather, they argue that radical innovation requires a clear organizational structure and formal MCS.

Relevância:

80.00%

Publicador:

Resumo:

Summary: Particulate air pollution is associated with increased cardiovascular risk. The induction of systemic inflammation following particle inhalation represents a plausible mechanistic pathway. The purpose of this study was to assess the associations of short-term exposure to ambient particulate matter of aerodynamic diameter less than 10 μm (PM10) with circulating inflammatory markers in 6183 adults in Lausanne, Switzerland. The results show that short-term exposure to PM10 was associated with higher levels of circulating IL-6 and TNF-α. The positive association of PM10 with markers of systemic inflammation substantiates the link between air pollution and cardiovascular risk. Background: Variations in short-term exposure to particulate matter (PM) have been repeatedly associated with daily all-cause mortality. Particle-induced inflammation has been postulated to be one of the important mechanisms behind the increased cardiovascular risk. Experimental in-vitro, in-vivo and controlled human studies suggest that interleukin 6 (IL-6) and tumor-necrosis-factor alpha (TNF-α) could represent key mediators of the inflammatory response to PM. So far, the associations of short-term exposure to ambient PM with circulating inflammatory markers have been inconsistent in studies of specific subgroups, and the epidemiological evidence linking short-term exposure to ambient PM and systemic inflammation in the general population is scarce. Large-scale population-based studies have not yet explored important inflammatory markers such as IL-6, IL-1β or TNF-α. We therefore analyzed the associations between short-term exposure to ambient PM10 and circulating levels of high-sensitivity CRP (hs-CRP), IL-6, IL-1β and TNF-α in the population-based CoLaus study.
Objectives: To assess the associations of short-term exposure to ambient particulate matter of aerodynamic diameter less than 10 μm (PM10) with circulating inflammatory markers, including hs-CRP, IL-6, IL-1β and TNF-α, in adults aged 35 to 75 years from the general population. Methodology: All subjects were participants in the CoLaus study (www.colaus.ch), whose baseline examination was carried out from 2003 to 2006. Overall, 6184 participants were included; for the present analysis, 6183 of them had data on at least one of the four circulating inflammatory markers. The monitoring data were obtained from the website of the Swiss National Air Pollution Monitoring Network (NABEL). We analyzed data on PM10 as well as outside air temperature, pressure and humidity. Hourly concentrations of PM10 were collected from 1 January 2003 to 31 December 2006. Robust linear regression (PROC ROBUSTREG) was used to evaluate the relationship between the inflammatory cytokines and PM10. We adjusted all analyses for age, sex, body mass index, smoking status, alcohol consumption, diabetes status, hypertension status, education level, zip code, and statin intake. All analyses were also adjusted for the effects of weather by including temperature, barometric pressure, and season as covariates in the adjusted models. We performed simple and multiple logistic regression analyses. Descriptive statistical analysis used the Wilcoxon rank-sum test (for medians). All data analyses were performed using SAS software (version 9.2; SAS Institute Inc., Cary, NC, USA), and a two-sided significance level of 5% was used. Results: PM10 levels averaged over 24 hours were significantly and positively associated with continuous IL-6 and TNF-α levels in the whole study population, both in unadjusted and adjusted analyses.
For each cytokine there was a similar seasonal pattern, with wider confidence intervals in summer than during the other seasons, which might partly be due to the smaller number of participants examined in summer. The associations of PM10 with IL-6 and TNF-α were also found after dichotomizing these cytokines into high versus low levels, which suggests that the associations of PM10 with the continuous cytokine levels are robust to distributional assumptions and to potential outlier values. In contrast to what we observed for continuous IL-1β levels, high PM10 levels were significantly associated with high (dichotomized) IL-1β. PM10 was significantly associated with IL-6 and TNF-α in men, but only with TNF-α in women; however, there was no significant statistical interaction between PM10 and sex. For IL-6 and TNF-α, the associations tended to be stronger in younger people, with a significant interaction between PM10 and age group for IL-6. PM10 was significantly associated with IL-6 and TNF-α both in the healthy group and in the "non-healthy" group, although the statistical interaction between health status and PM10 was not significant. Conclusion: In summary, we found significant independent positive associations of short-term exposure to PM10 with circulating levels of IL-6 and TNF-α in the adult population of Lausanne. Our findings strongly support the idea that short-term exposure to PM10 is sufficient to induce systemic inflammation on a broad scale in the general population. From a public health perspective, the reported association of elevated inflammatory cytokines with short-term exposure to PM10 in a city with relatively clean air such as Lausanne supports the importance of limiting urban air pollution levels.
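The covariate-adjusted robust regression described above was run in SAS (PROC ROBUSTREG), but the idea can be sketched outside SAS. The fragment below fits a Huber M-estimator by iteratively reweighted least squares on synthetic data; the variable names, effect sizes and outliers are invented stand-ins, not the study's data or code.

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented data loosely mimicking the setup: an outcome (say, log IL-6)
# regressed on 24-hour mean PM10 plus two covariates, with gross outliers.
n = 500
pm10 = rng.uniform(10, 60, n)
age = rng.uniform(35, 75, n)
sex = rng.integers(0, 2, n).astype(float)
y = 0.02 * pm10 + 0.01 * age + 0.1 * sex + rng.normal(0, 0.5, n)
y[:10] += 5.0  # outliers a robust fit should down-weight

X = np.column_stack([np.ones(n), pm10, age, sex])

def huber_irls(X, y, c=1.345, n_iter=50):
    """Huber M-estimate via iteratively reweighted least squares."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]  # start from OLS
    for _ in range(n_iter):
        r = y - X @ beta
        s = np.median(np.abs(r - np.median(r))) / 0.6745  # robust scale (MAD)
        u = np.abs(r / s)
        w = np.where(u <= c, 1.0, c / u)  # Huber weights
        Xw = X * w[:, None]
        beta = np.linalg.solve(Xw.T @ X, Xw.T @ y)  # weighted normal equations
    return beta

beta = huber_irls(X, y)  # beta[1] estimates the PM10 coefficient
```

The down-weighting of large residuals is what makes the estimated PM10 coefficient insensitive to the outlying observations, mirroring the robustness-to-outliers point made about the study's results.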

Relevância:

80.00%

Publicador:

Resumo:

Notre consommation en eau souterraine, en particulier comme eau potable ou pour l'irrigation, a considérablement augmenté au cours des années. De nombreux problèmes font alors leur apparition, allant de la prospection de nouvelles ressources à la remédiation des aquifères pollués. Indépendamment du problème hydrogéologique considéré, le principal défi reste la caractérisation des propriétés du sous-sol. Une approche stochastique est alors nécessaire afin de représenter cette incertitude en considérant de multiples scénarios géologiques et en générant un grand nombre de réalisations géostatistiques. Nous rencontrons alors la principale limitation de ces approches qui est le coût de calcul dû à la simulation des processus d'écoulements complexes pour chacune de ces réalisations. Dans la première partie de la thèse, ce problème est étudié dans le contexte de propagation de l'incertitude, où un ensemble de réalisations est identifié comme représentant les propriétés du sous-sol. Afin de propager cette incertitude à la quantité d'intérêt tout en limitant le coût de calcul, les méthodes actuelles font appel à des modèles d'écoulement approximés. Cela permet l'identification d'un sous-ensemble de réalisations représentant la variabilité de l'ensemble initial. Le modèle complexe d'écoulement est alors évalué uniquement pour ce sous-ensemble, et, sur la base de ces réponses complexes, l'inférence est faite. Notre objectif est d'améliorer la performance de cette approche en utilisant toute l'information à disposition. Pour cela, le sous-ensemble de réponses approximées et exactes est utilisé afin de construire un modèle d'erreur, qui sert ensuite à corriger le reste des réponses approximées et prédire la réponse du modèle complexe. Cette méthode permet de maximiser l'utilisation de l'information à disposition sans augmentation perceptible du temps de calcul. La propagation de l'incertitude est alors plus précise et plus robuste.
La stratégie explorée dans le premier chapitre consiste à apprendre d'un sous-ensemble de réalisations la relation entre les modèles d'écoulement approximé et complexe. Dans la seconde partie de la thèse, cette méthodologie est formalisée mathématiquement en introduisant un modèle de régression entre les réponses fonctionnelles. Comme ce problème est mal posé, il est nécessaire d'en réduire la dimensionnalité. Dans cette optique, l'innovation du travail présenté provient de l'utilisation de l'analyse en composantes principales fonctionnelles (ACPF), qui non seulement effectue la réduction de dimensionnalité tout en maximisant l'information retenue, mais permet aussi de diagnostiquer la qualité du modèle d'erreur dans cet espace fonctionnel. La méthodologie proposée est appliquée à un problème de pollution par une phase liquide non aqueuse et les résultats obtenus montrent que le modèle d'erreur permet une forte réduction du temps de calcul tout en estimant correctement l'incertitude. De plus, pour chaque réponse approximée, une prédiction de la réponse complexe est fournie par le modèle d'erreur. Le concept de modèle d'erreur fonctionnel est donc pertinent pour la propagation de l'incertitude, mais aussi pour les problèmes d'inférence bayésienne. Les méthodes de Monte Carlo par chaîne de Markov (MCMC) sont les algorithmes les plus communément utilisés afin de générer des réalisations géostatistiques en accord avec les observations. Cependant, ces méthodes souffrent d'un taux d'acceptation très bas pour les problèmes de grande dimensionnalité, résultant en un grand nombre de simulations d'écoulement gaspillées. Une approche en deux temps, le "MCMC en deux étapes", a été introduite afin d'éviter les simulations du modèle complexe inutiles par une évaluation préliminaire de la réalisation. Dans la troisième partie de la thèse, le modèle d'écoulement approximé couplé à un modèle d'erreur sert d'évaluation préliminaire pour le "MCMC en deux étapes".
Nous démontrons une augmentation du taux d'acceptation par un facteur de 1.5 à 3 en comparaison avec une implémentation classique de MCMC. Une question reste sans réponse : comment choisir la taille de l'ensemble d'entraînement et comment identifier les réalisations permettant d'optimiser la construction du modèle d'erreur ? Cela requiert une stratégie itérative afin que, à chaque nouvelle simulation d'écoulement, le modèle d'erreur soit amélioré en incorporant les nouvelles informations. Ceci est développé dans la quatrième partie de la thèse, où cette méthodologie est appliquée à un problème d'intrusion saline dans un aquifère côtier. -- Our consumption of groundwater, in particular as drinking water and for irrigation, has considerably increased over the years and groundwater is becoming an increasingly scarce and endangered resource. Nowadays, we are facing many problems ranging from water prospecting to sustainable management and remediation of polluted aquifers. Independently of the hydrogeological problem, the main challenge remains dealing with the incomplete knowledge of the underground properties. Stochastic approaches have been developed to represent this uncertainty by considering multiple geological scenarios and generating a large number of realizations. The main limitation of this approach is the computational cost associated with performing complex flow simulations for each realization. In the first part of the thesis, we explore this issue in the context of uncertainty propagation, where an ensemble of geostatistical realizations is identified as representative of the subsurface uncertainty. To propagate this lack of knowledge to the quantity of interest (e.g., the concentration of pollutant in extracted water), it is necessary to evaluate the flow response of each realization.
Due to computational constraints, state-of-the-art methods make use of approximate flow simulations to identify a subset of realizations that represents the variability of the ensemble. The complex and computationally heavy flow model is then run for this subset, based on which inference is made. Our objective is to increase the performance of this approach by using all of the available information and not solely the subset of exact responses. Two error models are proposed to correct the approximate responses following a machine learning approach. For the subset identified by a classical approach (here the distance kernel method) both the approximate and the exact responses are known. This information is used to construct an error model and correct the ensemble of approximate responses to predict the "expected" responses of the exact model. The proposed methodology makes use of all the available information without perceptible additional computational costs and leads to an increase in accuracy and robustness of the uncertainty propagation. The strategy explored in the first chapter consists in learning from a subset of realizations the relationship between proxy and exact curves. In the second part of this thesis, the strategy is formalized in a rigorous mathematical framework by defining a regression model between functions. As this problem is ill-posed, it is necessary to reduce its dimensionality. The novelty of the work comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows a diagnostic of the quality of the error model in the functional space. The proposed methodology is applied to a pollution problem involving a non-aqueous phase liquid. The error model allows a strong reduction of the computational cost while providing a good estimate of the uncertainty.
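The FPCA-based error model can be pictured with a toy numerical experiment. The sketch below (plain NumPy; the proxy/exact pair and all dimensions are invented, and this is not the thesis code) performs discretized functional PCA, regresses exact scores on proxy scores over a small learning set, and then corrects every proxy curve in the ensemble.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 200 realizations, responses discretized on 50 steps.
# The "exact" model is expensive; the proxy is a biased version of it.
n_real, n_t = 200, 50
t = np.linspace(0, 1, n_t)
params = rng.normal(size=(n_real, 3))
exact = (params[:, [0]] * np.sin(2 * np.pi * t)
         + params[:, [1]] * t
         + 0.3 * params[:, [2]])
proxy = 0.8 * exact + 0.2 * t  # systematic bias the error model must learn

# Learning set: the small subset where both proxy and exact are simulated.
learn = np.arange(30)

def fpca_scores(curves, mean, components):
    return (curves - mean) @ components.T

# Discretized functional PCA of each response family via SVD.
mean_p, mean_e = proxy[learn].mean(axis=0), exact[learn].mean(axis=0)
_, _, Vp = np.linalg.svd(proxy[learn] - mean_p, full_matrices=False)
_, _, Ve = np.linalg.svd(exact[learn] - mean_e, full_matrices=False)
k = 3  # retained components
Sp = fpca_scores(proxy[learn], mean_p, Vp[:k])
Se = fpca_scores(exact[learn], mean_e, Ve[:k])

# Regression between functional responses, reduced to a regression
# between low-dimensional FPCA scores (least squares).
B, *_ = np.linalg.lstsq(Sp, Se, rcond=None)

def correct(proxy_curves):
    """Predict the exact response from a proxy response via the error model."""
    s = fpca_scores(proxy_curves, mean_p, Vp[:k]) @ B
    return mean_e + s @ Ve[:k]

pred = correct(proxy)  # corrected responses for the whole ensemble
```

Only 30 exact simulations are used, yet every realization in the ensemble receives a corrected prediction of its exact response, which is the computational saving the text describes.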
The individual correction of the proxy response by the error model leads to an excellent prediction of the exact response, opening the door to many applications. The concept of a functional error model is useful not only in the context of uncertainty propagation, but also, and maybe even more so, for Bayesian inference. Markov chain Monte Carlo (MCMC) algorithms are the most common choice to ensure that the generated realizations are sampled in accordance with the observations. However, this approach suffers from a low acceptance rate in high-dimensional problems, resulting in a large number of wasted flow simulations. This led to the introduction of two-stage MCMC, where the computational cost is decreased by avoiding unnecessary simulations of the exact flow model thanks to a preliminary evaluation of the proposal. In the third part of the thesis, a proxy is coupled to an error model to provide an approximate response for the two-stage MCMC set-up. We demonstrate an increase in acceptance rate by a factor of three with respect to one-stage MCMC results. An open question remains: how do we choose the size of the learning set and identify the realizations that optimize the construction of the error model? This requires devising an iterative strategy to construct the error model, such that, as new flow simulations are performed, the error model is iteratively improved by incorporating the new information. This is discussed in the fourth part of the thesis, in which we apply this methodology to a problem of saline intrusion in a coastal aquifer.
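The two-stage idea — screen each proposal with the cheap proxy, and run the exact model only for proposals that survive — can be sketched as a delayed-acceptance Metropolis step. The 1-D toy densities below are invented stand-ins for the proxy (with its error model) and the exact flow model; they are not the thesis setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_post_exact(x):
    """Stand-in for the expensive exact log-posterior."""
    return -0.5 * x**2

def log_post_proxy(x):
    """Stand-in for the cheap proxy log-posterior (slightly biased)."""
    return -0.5 * (x - 0.1)**2

def two_stage_mcmc(n_iter, step=1.0):
    x, lp_e, lp_p = 0.0, log_post_exact(0.0), log_post_proxy(0.0)
    exact_calls, samples = 0, []
    for _ in range(n_iter):
        y = x + step * rng.normal()
        lq_p = log_post_proxy(y)
        # Stage 1: screen the proposal with the cheap proxy only.
        if np.log(rng.uniform()) < lq_p - lp_p:
            # Stage 2: the exact model runs only for survivors; the
            # acceptance ratio corrects for the stage-1 filter so the
            # chain still targets the exact posterior.
            lq_e = log_post_exact(y)
            exact_calls += 1
            if np.log(rng.uniform()) < (lq_e - lp_e) + (lp_p - lq_p):
                x, lp_e, lp_p = y, lq_e, lq_p
        samples.append(x)
    return np.array(samples), exact_calls

samples, calls = two_stage_mcmc(20000)
```

Because stage 1 rejects poor proposals before any exact evaluation, the number of exact-model calls is strictly smaller than the number of iterations, which is exactly where the saving over one-stage MCMC comes from.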

Relevância:

80.00%

Publicador:

Resumo:

This paper discusses basic theoretical strategies used to deal with measurement uncertainties arising in different experimental situations. It attempts to indicate the most appropriate method for obtaining a reliable estimate of the quantity to be evaluated, depending on the characteristics of the available data. The theoretical strategies discussed are supported by experimental details, conditions and results taken from examples in the field of radionuclide metrology. Special care regarding the correct treatment of covariances is emphasized, because the results obtained are unreliable if these are neglected.
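For the simplest such situation — several measurements of one quantity with a known covariance matrix — the covariance-aware best estimate is the generalized least-squares (inverse-covariance weighted) mean. The numerical values below are invented for illustration; they are not taken from the paper.

```python
import numpy as np

# Hypothetical example: three activity measurements of the same source,
# the last two made with the same instrument and therefore correlated.
x = np.array([100.2, 99.5, 100.9])           # measured values (e.g. kBq)
V = np.array([[0.40, 0.00, 0.00],            # covariance matrix of the inputs
              [0.00, 0.25, 0.15],
              [0.00, 0.15, 0.25]])

def gls_mean(x, V):
    """Best linear unbiased estimate of a common quantity and its variance."""
    Vinv = np.linalg.inv(V)
    one = np.ones_like(x)
    var = 1.0 / (one @ Vinv @ one)
    return var * (one @ Vinv @ x), var

mean, var = gls_mean(x, V)
# Neglecting the off-diagonal covariances changes both the weights and,
# crucially, the reported uncertainty:
naive, naive_var = gls_mean(x, np.diag(np.diag(V)))
```

With a positive covariance between the correlated pair, the naive diagonal-only treatment understates the variance of the combined result, which is exactly the unreliability the paper warns about.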

Relevância:

40.00%

Publicador:

Resumo:

The high density of slope failures in western Norway is due to the steep relief and to the concentration of various structures inherited from protracted ductile and brittle tectonics. Of the 72 investigated rock-slope instabilities, 13 developed in soft, weathered mafic and phyllitic allochthons. The intrinsic weakness of such rocks alone increases their susceptibility to gravitational deformation. In contrast, the gravitational structures in the hard gneisses reactivate prominent ductile and/or brittle fabrics. At 30 rockslides along cataclinal slopes, weak mafic layers of the foliation are reactivated as basal planes. Slope-parallel steep foliation forms the back-cracks of unstable columns. Folds are specifically present in the Storfjord area, together with a clustering of potential slope failures. Folding increases the probability of having planes favourably orientated with respect to the gravitational forces and the slope. High water pressure is believed to build up seasonally along the shallow-dipping Caledonian detachments and may contribute to destabilizing the rock slope above. Regional cataclastic faults localized the gravitational structures at 45 sites. The volume of the slope instabilities tends to increase with the number of reactivated prominent structures, and the spacing of the latter controls the size of the instabilities.

Relevância:

40.00%

Publicador:

Resumo:

The assessment of medical technologies has to answer several questions, ranging from safety and effectiveness to complex economic, social, and health-policy issues. The type of data needed to carry out such an evaluation depends on the specific questions to be answered, as well as on the stage of development of a technology. Basically, two types of data may be distinguished: (a) general demographic, administrative, or financial data not collected specifically for technology assessment; (b) data collected with respect either to a specific technology or to a disease or medical problem. On the basis of a pilot inquiry in Europe and bibliographic research, the following categories of type (b) data bases have been identified: registries, clinical data bases, banks of factual and bibliographic knowledge, and expert systems. Examples of each category are discussed briefly. The following aims for further research and practical goals are proposed: criteria for the minimal data set required, improvement of the registries and clinical data banks, and development of an international clearinghouse to enhance the diffusion of information on both existing data bases and available reports on medical technology assessments.

Relevância:

40.00%

Publicador:

Resumo:

Ecologically and evolutionarily oriented research on learning has traditionally been carried out on vertebrates and bees. While less sophisticated than those animals, fruit flies (Drosophila) are capable of several forms of learning, and have the advantage of a short generation time, which makes them an ideal system for experimental evolution studies. This review summarizes the insights into evolutionary questions about learning gained over the last decade from evolutionary experiments on Drosophila. These experiments demonstrate that Drosophila have the genetic potential to evolve substantially improved learning performance in ecologically relevant learning tasks. In at least one set of selected populations the improved learning generalized to a task other than the one used to impose selection, involving a different behavior, different stimuli, and a different sensory channel for the aversive reinforcement. This improvement in learning ability was associated with reductions in other fitness-related traits, such as larval competitive ability and lifespan, pointing to evolutionary trade-offs of improved learning. These trade-offs were confirmed by other evolutionary experiments in which a reduction in learning performance was observed as a correlated response to selection for tolerance to larval nutritional stress or for delayed aging. Such trade-offs could be one reason why fruit flies have not fully used up their evolutionary potential for learning ability. Finally, another evolutionary experiment with Drosophila provided the first direct evidence for the long-standing idea that learning can under some circumstances accelerate, and in others slow down, genetically based evolutionary change. These results demonstrate the usefulness of fruit flies as a model system for addressing evolutionary questions about learning.