950 results for Prior knowledge
Abstract:
A firm's prior knowledge facilitates the absorption of new knowledge, thereby renewing a firm's systematic search, transfer and absorption capabilities. The rapidly expanding field of biotechnology is characterised by the convergence of disparate sciences and technologies. This paper examines the shift from protein-based to DNA-based diagnostic technologies and quantifies the value of a firm's prior knowledge and its relation to future knowledge development. Four dimensions of diagnostics and four dimensions of knowledge in biotechnology firms are analysed. A simple scaled matrix method is developed to quantify the positive and negative heuristic values of prior scientific and technological knowledge that is useful for the acquisition and absorption of new knowledge.
Abstract:
Knowledge, especially scientific and technological knowledge, grows according to knowledge trajectories and guideposts that make up the prior knowledge of an organization. We argue that these knowledge structures and their specific components lead to successful innovation. A firm's prior knowledge facilitates the absorption of new knowledge, thereby renewing a firm's systematic search, transfer and acquisition of knowledge and capabilities. In particular, the exponential growth in biotechnology is characterized by the convergence of disparate scientific and technological knowledge resources. This paper examines the shift from protein-based to DNA-based diagnostic technologies as an example to quantify the value of a firm's prior knowledge using relative values of knowledge distance. The distance between core prior knowledge and the rate of transition from one knowledge system to another is identified as a proxy for the value of a firm's prior knowledge. The overall difficulty of transition from one technology paradigm to another is discussed. We argue that this transition is possible when the knowledge distance is minimal and the transition process has correspondingly high absorptive capacity. Our findings show that knowledge distance is a determinant of the feasibility, continuity and capture of scientific and technological knowledge. Copyright © 2003 John Wiley & Sons, Ltd.
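The abstract does not give the formula for knowledge distance, so the following is only a minimal, illustrative sketch of one way such a distance between knowledge profiles could be computed; the knowledge dimensions, the counts and the cosine metric are assumptions for illustration and are not the paper's scaled-matrix method.

```python
# Illustrative sketch: knowledge distance between two knowledge profiles.
# Dimensions, counts and the cosine metric are hypothetical assumptions.
import numpy as np

def knowledge_distance(firm_profile, target_profile):
    """Cosine distance between knowledge profiles (0 = identical, 1 = orthogonal)."""
    a = np.asarray(firm_profile, dtype=float)
    b = np.asarray(target_profile, dtype=float)
    cos_sim = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return 1.0 - cos_sim

# Hypothetical counts over four knowledge dimensions (e.g. publications or patents
# in immunochemistry, enzymology, molecular biology, instrumentation).
protein_based_firm = [40, 35, 5, 20]   # prior knowledge of a protein-diagnostics firm
dna_based_target   = [5, 10, 60, 25]   # knowledge profile of DNA-based diagnostics

print(f"knowledge distance: {knowledge_distance(protein_based_firm, dna_based_target):.2f}")
```

A small distance under such a metric would correspond, in the paper's terms, to a transition that is more feasible for the firm's existing absorptive capacity.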
Abstract:
The recent developments in high magnetic field 13C magnetic resonance spectroscopy with improved localization and shimming techniques have led to important gains in sensitivity and spectral resolution of 13C in vivo spectra in the rodent brain, enabling the separation of several 13C isotopomers of glutamate and glutamine. In this context, the assumptions used in spectral quantification might have a significant impact on the determination of the 13C concentrations and the related metabolic fluxes. In this study, the time-domain spectral quantification algorithm AMARES (advanced method for accurate, robust and efficient spectral fitting) was applied to 13C magnetic resonance spectroscopy spectra acquired in the rat brain at 9.4 T, following infusion of [1,6-13C2] glucose. Using both Monte Carlo simulations and in vivo data, the goals of this work were: (1) to validate the quantification of in vivo 13C isotopomers using AMARES; (2) to assess the impact of the prior knowledge on the quantification of in vivo 13C isotopomers using AMARES; (3) to compare AMARES and LCModel (linear combination of model spectra) for the quantification of in vivo 13C spectra. AMARES led to accurate and reliable 13C spectral quantification, similar to that obtained using LCModel, when the frequency shifts, J-coupling constants and phase patterns of the different 13C isotopomers were included as prior knowledge in the analysis.
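To illustrate what "prior knowledge" means in this style of time-domain fitting, here is a minimal sketch in the spirit of AMARES: the frequency and phase of a single resonance are fixed from prior knowledge, and only amplitude and damping are estimated. The signal model, sampling parameters and values are assumptions for illustration, not the acquisition or fitting protocol of the study.

```python
# Sketch: time-domain fit of one damped sinusoid with frequency and phase
# fixed as prior knowledge; only amplitude and damping are free parameters.
import numpy as np
from scipy.optimize import least_squares

dt = 1e-3                      # dwell time (s), hypothetical
t = np.arange(512) * dt        # time axis of the FID
freq_prior = 55.0              # Hz, fixed from prior knowledge (e.g. a known shift)
phase_prior = 0.0              # rad, fixed from prior knowledge

def model(params, t):
    amp, damping = params
    return amp * np.exp(-damping * t) * np.exp(1j * (2 * np.pi * freq_prior * t + phase_prior))

# Simulate a noisy FID with known ground truth, then fit it.
rng = np.random.default_rng(0)
truth = model([3.0, 20.0], t)
fid = truth + 0.2 * (rng.standard_normal(t.size) + 1j * rng.standard_normal(t.size))

def residuals(params):
    r = fid - model(params, t)
    return np.concatenate([r.real, r.imag])   # least_squares needs real residuals

fit = least_squares(residuals, x0=[1.0, 10.0], bounds=([0, 0], [np.inf, np.inf]))
print("estimated amplitude and damping:", fit.x)
```

Constraining frequencies, couplings and phases in this way is what reduces the number of free parameters when many overlapping isotopomer resonances are fitted simultaneously.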
Abstract:
The aim of this study is to perform a thorough comparison of quantitative susceptibility mapping (QSM) techniques and their dependence on the assumptions made. The compared methodologies were: two iterative single-orientation methodologies minimizing the l2 norm or the l1 total-variation norm using prior knowledge of the edges of the object, one over-determined multiple-orientation method (COSMOS), and a newly proposed modulated closed-form solution (MCF). The performance of these methods was compared using a numerical phantom and in vivo high-resolution (0.65 mm isotropic) brain data acquired at 7 T using a new coil combination method. For all QSM methods, the relevant regularization and prior-knowledge parameters were systematically changed in order to evaluate the optimal reconstruction in the presence and absence of a ground truth. Additionally, the QSM contrast was compared to conventional gradient recalled echo (GRE) magnitude and R2* maps obtained from the same dataset. The QSM reconstruction results of the single-orientation methods show comparable performance. The MCF method has the highest correlation (corrMCF = 0.95, r2MCF = 0.97) with the state-of-the-art method (COSMOS), with the additional advantage of extremely fast computation time. The L-curve method gave the visually most satisfactory balance between reduction of streaking artifacts and over-regularization, with the latter being overemphasized when using the COSMOS susceptibility maps as ground truth. R2* and susceptibility maps, when calculated from the same datasets, although based on distinct features of the data, have a comparable ability to distinguish deep gray matter structures.
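For readers unfamiliar with closed-form QSM reconstruction, the following is a minimal sketch of a plain Tikhonov (l2) closed-form dipole inversion; it illustrates the general idea of a closed-form solution only and is not the MCF method or any of the compared implementations. The field map, voxel size and regularisation weight are hypothetical.

```python
# Sketch: closed-form Tikhonov-regularised dipole inversion for QSM.
import numpy as np

def dipole_kernel(shape, voxel_size=(1.0, 1.0, 1.0)):
    """Unit dipole kernel D(k) = 1/3 - kz^2/|k|^2 on an FFT grid (B0 along z)."""
    axes = [np.fft.fftfreq(n, d=v) for n, v in zip(shape, voxel_size)]
    kx, ky, kz = np.meshgrid(*axes, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = np.inf                      # avoid division by zero at k = 0
    return 1.0 / 3.0 - kz**2 / k2

def tikhonov_qsm(field_ppm, lam=0.05, voxel_size=(1.0, 1.0, 1.0)):
    """chi(k) = D(k) * phi(k) / (D(k)^2 + lambda): closed-form regularised inversion."""
    D = dipole_kernel(field_ppm.shape, voxel_size)
    phi_k = np.fft.fftn(field_ppm)
    chi_k = D * phi_k / (D**2 + lam)
    return np.real(np.fft.ifftn(chi_k))

# Toy call on random data just to show the interface; a real field map would come
# from the unwrapped, background-removed GRE phase.
field = np.random.default_rng(1).standard_normal((32, 32, 32)) * 0.01
chi = tikhonov_qsm(field, lam=0.05)
print(chi.shape)
```

The choice of the regularisation weight plays the same role as the regularisation and prior-knowledge parameters that the study sweeps systematically, e.g. via the L-curve.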
Abstract:
Fast interceptive actions, such as catching a ball, rely upon accurate and precise information from vision. Recent models rely on flexible combinations of visual angle and its rate of expansion, of which the tau parameter is a specific case. When an object approaches an observer, however, its trajectory may introduce bias into tau-like parameters that renders these computations unacceptable as the sole source of information for actions. Here we show that observers' knowledge of object size influences their action timing, and that known size combined with image expansion simplifies the computations required to make interceptive actions and provides a route for experience to influence interceptive action.
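The standard small-angle relations behind this argument (stated here for clarity, not reproduced from the paper) make the simplification explicit. For an object of physical size $S$ viewed at distance $D$ with closing speed $v = -\dot{D}$,
\[
\theta \approx \frac{S}{D}, \qquad \dot{\theta} \approx \frac{S\,v}{D^{2}},
\]
so the tau parameter
\[
\tau = \frac{\theta}{\dot{\theta}} \approx \frac{D}{v}
\]
estimates time to contact without knowledge of $S$, but it confounds distance and speed. If $S$ is known, both are recovered directly from the image:
\[
D \approx \frac{S}{\theta}, \qquad v \approx \frac{S\,\dot{\theta}}{\theta^{2}},
\]
which is the sense in which known size combined with image expansion simplifies the computations required for interceptive timing.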
Abstract:
A basic principle in data modelling is to incorporate available a priori information regarding the underlying data-generating mechanism into the modelling process. We adopt this principle and consider grey-box radial basis function (RBF) modelling capable of incorporating prior knowledge. Specifically, we show how to explicitly incorporate two types of prior knowledge: that the underlying data-generating mechanism exhibits a known symmetry property, and that the underlying process obeys a set of given boundary value constraints. The class of orthogonal least squares regression algorithms can readily be applied to construct parsimonious grey-box RBF models with enhanced generalisation capability.
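A minimal sketch of how a known symmetry can be built into an RBF model is given below; it uses ordinary least squares on symmetrised Gaussian basis functions purely for illustration, whereas the paper employs orthogonal least squares regression to select a parsimonious model. The data, centres and widths are hypothetical.

```python
# Sketch: grey-box RBF with a built-in even symmetry f(x) = f(-x).
# Each basis function is the symmetrised pair phi(|x - c|) + phi(|x + c|),
# so any weight vector yields a symmetric model by construction.
import numpy as np

def gaussian(r, width=1.0):
    return np.exp(-(r / width) ** 2)

def symmetric_design_matrix(x, centres, width=1.0):
    """Each column is a symmetrised Gaussian RBF."""
    x = x[:, None]
    c = centres[None, :]
    return gaussian(np.abs(x - c), width) + gaussian(np.abs(x + c), width)

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 200)
y = np.cos(2 * x) + 0.05 * rng.standard_normal(x.size)   # an even target function

centres = np.linspace(0, 3, 8)                            # centres on one side suffice
Phi = symmetric_design_matrix(x, centres, width=0.8)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)

y_hat = Phi @ w
print("max asymmetry of the fit:", np.max(np.abs(y_hat - y_hat[::-1])))  # ~0 by construction
```

Boundary value constraints can be handled in the same spirit, e.g. by adding equality constraints on the weights, though that part is not sketched here.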
Abstract:
A common debate among dermatopathologists is whether prior knowledge of the clinical picture of melanocytic skin neoplasms may introduce a potential bias into the histopathologic examination. Histologic slides from 99 melanocytic skin neoplasms were circulated among 10 clinical dermatologists, all of them formally trained and board-certified dermatopathologists: 5 dermatopathologists had clinical images available only after a 'blind' examination (Group 1); the other 5 had clinical images available before microscopic examination (Group 2). Data from the two groups were compared regarding 'consensus' (a diagnosis agreed upon by ≥4 dermatopathologists per group), chance-corrected interobserver agreement (Fleiss' k) and level of diagnostic confidence (LDC: a 1-5 arbitrary scale indicating 'increasing reliability' of any given diagnosis). Compared with Group 1 dermatopathologists, Group 2 achieved a lower number of consensus diagnoses (84 vs. 90) but a higher k value (0.74 vs. 0.69) and a greater mean LDC value (4.57 vs. 4.32). The same consensus was achieved by the two groups in 81/99 cases. Spitzoid neoplasms were the most frequently controversial for both groups. The histopathologic interpretation of melanocytic neoplasms does not seem to be biased by knowledge of the clinical picture before histopathologic examination.
Abstract:
Visualising data for exploratory analysis is a big challenge in scientific and engineering domains where there is a need to gain insight into the structure and distribution of the data. Typically, visualisation methods like principal component analysis and multi-dimensional scaling are used, but it is difficult to incorporate prior knowledge about the structure of the data into the analysis. In this technical report we discuss a complementary approach based on an extension of a well-known non-linear probabilistic model, the Generative Topographic Mapping. We show that by including prior information about the covariance structure in the model, we are able to improve both the data visualisation and the model fit.
Abstract:
Visualising data for exploratory analysis is a major challenge in many applications. Visualisation allows scientists to gain insight into the structure and distribution of the data, for example finding common patterns and relationships between samples as well as variables. Typically, visualisation methods like principal component analysis and multi-dimensional scaling are employed. These methods are favoured because of their simplicity, but they cannot cope with missing data and it is difficult to incorporate prior knowledge about properties of the variable space into the analysis; this is particularly important in the high-dimensional, sparse datasets typical in geochemistry. In this paper we show how to utilise a block-structured correlation matrix using a modification of a well-known non-linear probabilistic visualisation model, the Generative Topographic Mapping (GTM), which can cope with missing data. The block structure supports direct modelling of strongly correlated variables. We show that by including prior structural information it is possible to improve both the data visualisation and the model fit. These benefits are demonstrated on artificial data as well as a real geochemical dataset used for oil exploration, where the proposed modifications improved the missing data imputation results by 3 to 13%.
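To make the notion of a block-structured correlation matrix concrete, the following minimal sketch builds one for hypothetical groups of strongly correlated variables; the group sizes and correlation values are assumptions, and how such a matrix is wired into the GTM noise model is not shown.

```python
# Sketch: block-diagonal correlation matrix, with one within-block correlation
# shared by all variables in a group and zero correlation across groups.
import numpy as np

def block_correlation(block_sizes, within=0.8):
    """Block-diagonal correlation matrix: `within` inside each block, 0 elsewhere."""
    d = sum(block_sizes)
    C = np.zeros((d, d))
    start = 0
    for size in block_sizes:
        block = np.full((size, size), within)
        np.fill_diagonal(block, 1.0)
        C[start:start + size, start:start + size] = block
        start += size
    return C

# Three hypothetical groups of strongly correlated variables (e.g. element groups).
C = block_correlation([4, 3, 5], within=0.8)
print(C.shape, np.allclose(C, C.T))
```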
Abstract:
This article presents two novel approaches for incorporating sentiment prior knowledge into the topic model for weakly supervised sentiment analysis, where sentiment labels are considered as topics. One is by modifying the Dirichlet prior for the topic-word distribution (LDA-DP); the other is by augmenting the model objective function through adding terms that express preferences on expectations of sentiment labels of the lexicon words using generalized expectation criteria (LDA-GE). We conducted extensive experiments on English movie review data and a multi-domain sentiment dataset, as well as Chinese product reviews about mobile phones, digital cameras, MP3 players, and monitors. The results show that while both LDA-DP and LDA-GE perform comparably to existing weakly supervised sentiment classification algorithms, they are much simpler and computationally more efficient, rendering them more suitable for online and real-time sentiment classification on the Web. We observed that LDA-GE is more effective than LDA-DP, suggesting that it should be preferred when considering employing the topic model for sentiment analysis. Moreover, both models are able to extract highly domain-salient polarity words from text.
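A minimal sketch of the LDA-DP idea, i.e. encoding a sentiment lexicon in an asymmetric Dirichlet prior over the topic-word distributions, is shown below. The vocabulary, lexicon and prior values are illustrative assumptions, and the model's inference procedure is not reproduced.

```python
# Sketch: asymmetric Dirichlet prior beta over topic-word distributions, with
# sentiment labels treated as topics. Lexicon words get extra prior mass in the
# matching sentiment "topic" and little mass in the other one.
import numpy as np

vocab = ["great", "excellent", "awful", "boring", "camera", "battery"]
sentiment_lexicon = {"great": "positive", "excellent": "positive",
                     "awful": "negative", "boring": "negative"}
topics = ["positive", "negative"]        # sentiment labels treated as topics

def sentiment_dirichlet_prior(vocab, lexicon, topics, base=0.01, boost=1.0):
    """Asymmetric Dirichlet prior: lexicon words get extra mass under their own label."""
    beta = np.full((len(topics), len(vocab)), base)
    for word, label in lexicon.items():
        j = vocab.index(word)
        for i, topic in enumerate(topics):
            beta[i, j] = base + boost if topic == label else 1e-4   # boost or suppress
    return beta

beta = sentiment_dirichlet_prior(vocab, sentiment_lexicon, topics)
print(beta)
```

Non-lexicon words keep the symmetric base value, so their sentiment association is learned from the data rather than imposed by the prior.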
Abstract:
In this paper a prior knowledge representation for Artificial General Intelligence is proposed, based on fuzzy rules using linguistic variables. These linguistic variables may be produced by a neural network. Rules may be used to generate basic emotions, positive and negative, which influence the planning and execution of behavior. The representation of the Three Laws of Robotics as such prior knowledge is suggested as the highest level of motivation in AGI.
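A minimal sketch of a prior-knowledge rule expressed with a linguistic variable and a fuzzy membership function follows; the variable names, membership shape and rule are illustrative assumptions rather than the representation proposed in the paper.

```python
# Sketch: fuzzy rule over a linguistic variable, e.g.
# "IF threat is high THEN negative emotion is strong".
import numpy as np

def triangular(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0)

def rule_threat_high_then_negative(threat_level):
    """Degree to which the rule fires for a crisp threat level in [0, 1]."""
    mu_high = triangular(threat_level, 0.5, 1.0, 1.5)   # membership of "threat is high"
    return mu_high                                       # strength of "negative emotion"

for threat in (0.2, 0.6, 0.9):
    print(f"threat={threat:.1f} -> negative emotion strength "
          f"{rule_threat_high_then_negative(threat):.2f}")
```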
Abstract:
Here, we describe gene expression compositional assignment (GECA), a powerful yet simple method based on compositional statistics that can validate the transfer of prior knowledge, such as gene lists, into independent data sets, platforms and technologies. Transcriptional profiling has been used to derive gene lists that stratify patients into prognostic molecular subgroups and to assess biomarker performance in the pre-clinical setting. Archived public data sets are an invaluable resource for subsequent in silico validation, though their use can lead to data integration issues. We show that GECA can be used without the need to normalise expression levels between data sets and can outperform rank-based correlation methods. To validate GECA, we demonstrate its success in the cross-platform transfer of gene lists in different domains including: bladder cancer staging, tumour site of origin and mislabelled cell lines. We also show its effectiveness in transferring an epithelial ovarian cancer prognostic gene signature across technologies, from a microarray to a next-generation sequencing setting. In a final case study, we predict the tumour site of origin and histopathology of epithelial ovarian cancer cell lines. In particular, we identify and validate the commonly used cell line OVCAR-5 as non-ovarian, being gastrointestinal in origin. GECA is available as an open-source R package.
Abstract:
Context: Learning can be regarded as knowledge construction in which prior knowledge and experience serve as the basis for learners to expand their knowledge base. Such a process of knowledge construction has to take place continuously in order to enhance the learners' competence in a competitive working environment. As information consumers, individual users demand personalised information provision which meets their own specific purposes, goals, and expectations. Objectives: The current methods in requirements engineering are capable of modelling the common user's behaviour in the domain of knowledge construction. The users' requirements can be represented as a case in the defined structure, which can be reasoned over to enable requirements analysis. Such analysis needs to be enhanced so that personalised information provision can be tackled and modelled. However, there is a lack of suitable modelling methods to achieve this end. This paper presents a new ontological method for capturing individual users' requirements and transforming the requirements into personalised information provision specifications, so that the right information can be provided to the right user for the right purpose. Method: An experiment was conducted based on a qualitative method. A medium-sized group of users participated to validate the method and its techniques, i.e. articulation, mapping, configuration, and learning content. The results were used as feedback for improvement. Result: The research work has produced an ontology model with a set of techniques which support the functions for profiling users' requirements, reasoning over requirements patterns, generating workflows from norms, and formulating information provision specifications. Conclusion: The current requirements engineering approaches provide the methodical capability for developing solutions. Our research outcome, i.e. the ontology model with the techniques, can further enhance the RE approaches for modelling individual users' needs and discovering the users' requirements.
Abstract:
To explore the projection efficiency of a design, Tsai et al. [2000. Projective three-level main effects designs robust to model uncertainty. Biometrika 87, 467-475] introduced the Q criterion to compare three-level main-effects designs for quantitative factors, which allows the consideration of interactions in addition to main effects. In this paper, we extend their method and focus on the case in which experimenters have some prior knowledge, in advance of running the experiment, about the probabilities of effects being non-negligible. A criterion which incorporates experimenters' prior beliefs about the importance of each effect is introduced to compare orthogonal, or nearly orthogonal, main-effects designs with robustness to interactions as a secondary consideration. We show that this criterion, by exploiting prior information about model uncertainty, can lead to more appropriate designs reflecting experimenters' prior beliefs. © 2006 Elsevier B.V. All rights reserved.