16 results for knowledge mapping
in Aston University Research Archive
Abstract:
Causal mapping can help managers to think through the causal influences between issues, enabling them to base decisions on a more structured consideration of those issues. Even in regular meetings, learning and the integration of knowledge from diverse stakeholders can benefit from causal mapping. Four causal mapping meetings with management teams are analysed to assess how managers thought causally about their environment when making strategy. We found that although managers can use others' views to expand their environmental knowledge, some prefer to work with familiar information rather than less familiar input. Despite this preference, many managers thought systemically about a raft of related issues. We discuss our findings in the context of regular meetings and offer improvements to the facilitation of group causal mapping.
Abstract:
During group meetings it is often difficult for participants to effectively: share their knowledge to inform the outcome; acquire new knowledge from others to broaden and/or deepen their understanding; utilise all available knowledge to design an outcome; and record (to retain) the rationale behind the outcome to inform future activities. These tasks are difficult because, for example: only one person can share knowledge at a time, which hampers effective sharing; information overload makes acquisition problematic and can marginalise important knowledge; and intense dialogue around conflicting views makes recording more complex.
Abstract:
Visualising data for exploratory analysis is a major challenge in scientific and engineering domains where there is a need to gain insight into the structure and distribution of the data. Typically, visualisation methods like principal component analysis and multi-dimensional scaling are used, but it is difficult to incorporate prior knowledge about the structure of the data into the analysis. In this technical report we discuss a complementary approach based on an extension of a well-known non-linear probabilistic model, the Generative Topographic Mapping. We show that by including prior information about the covariance structure in the model, we are able to improve both the data visualisation and the model fit.
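A minimal sketch of the core idea, assuming Python with NumPy: a GTM-style EM loop in which the usual isotropic noise model is replaced by a fixed, user-supplied covariance matrix carrying the prior knowledge. The grid sizes, toy data and covariance values are illustrative; this is not the report's actual formulation.

```python
import numpy as np

def gtm_fit_with_prior_cov(X, Sigma, n_latent=10, n_rbf=5, rbf_width=1.0,
                           reg=1e-3, n_iter=50):
    """GTM-style EM with a FIXED noise covariance Sigma (prior knowledge).

    X     : (N, D) data matrix
    Sigma : (D, D) prior covariance of the observation noise (held fixed)
    Returns the latent grid Z, responsibilities R and mapped prototypes.
    """
    N, D = X.shape
    # 1D latent grid and RBF basis (a 2D grid works the same way)
    Z = np.linspace(-1, 1, n_latent)[:, None]                   # (K, 1)
    C = np.linspace(-1, 1, n_rbf)[:, None]                      # RBF centres
    Phi = np.exp(-((Z - C.T) ** 2) / (2 * rbf_width ** 2))      # (K, M)
    W = np.random.default_rng(0).normal(scale=0.1, size=(Phi.shape[1], D))

    Sigma_inv = np.linalg.inv(Sigma)
    for _ in range(n_iter):
        Y = Phi @ W                                             # (K, D) prototypes
        # E-step: responsibilities using the Mahalanobis distance under Sigma
        diff = X[None, :, :] - Y[:, None, :]                    # (K, N, D)
        maha = np.einsum('knd,de,kne->kn', diff, Sigma_inv, diff)
        logR = -0.5 * maha
        logR -= logR.max(axis=0, keepdims=True)
        R = np.exp(logR)
        R /= R.sum(axis=0, keepdims=True)                       # (K, N)
        # M-step for W: with Sigma fixed, the normal equations reduce to the
        # usual GTM ones, (Phi^T G Phi + reg I) W = Phi^T R X
        G = np.diag(R.sum(axis=1))
        A = Phi.T @ G @ Phi + reg * np.eye(Phi.shape[1])
        W = np.linalg.solve(A, Phi.T @ (R @ X))
    return Z, R, Phi @ W

# Toy usage: 2D data with strongly correlated variables
rng = np.random.default_rng(1)
t = rng.uniform(-1, 1, 200)
X = np.c_[t, 0.9 * t] + rng.normal(scale=0.05, size=(200, 2))
Sigma = np.array([[0.05, 0.045], [0.045, 0.05]])                # assumed prior covariance
Z, R, Y = gtm_fit_with_prior_cov(X, Sigma)
print(Y[:3])
```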
Abstract:
This paper addresses the problem of novelty detection in the case where the observed data are a mixture of a known 'background' process contaminated with an unknown other process, which generates the outliers, or novel observations. The framework described here is quite general, employing univariate classification with incomplete information, based on knowledge of the distribution (the probability density function, pdf) of the data generated by the 'background' process. The relative proportion of this 'background' component (the prior 'background' probability), the pdf and the prior probabilities of all other components are all assumed unknown. The main contribution is a new classification scheme that identifies the maximum proportion of observed data following the known 'background' distribution. The method exploits the Kolmogorov-Smirnov test to estimate the proportions, and afterwards data are Bayes-optimally separated. Results, demonstrated with synthetic data, show that this approach can produce more reliable results than a standard novelty detection scheme. The classification algorithm is then applied to the problem of identifying outliers in the SIC2004 data set, in order to detect the radioactive release simulated in the 'joker' data set. We propose this method as a reliable means of novelty detection in emergency situations, which can also be used to identify outliers prior to the application of a more general automatic mapping algorithm. © Springer-Verlag 2007.
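A simplified sketch of the two-stage idea, assuming Python with NumPy/SciPy: first bound the largest 'background' proportion consistent with the empirical CDF (a Kolmogorov-Smirnov-style comparison against the known background CDF), then separate the points with a Bayes rule. The tolerance constant and the uniform outlier density are illustrative assumptions, not the paper's exact estimator.

```python
import numpy as np
from scipy import stats

def estimate_background_proportion(x, bg_cdf, tol_const=1.36):
    """Largest mixing proportion p such that the empirical CDF is still
    consistent with F_emp = p*F_bg + (1-p)*F_other for SOME valid CDF F_other.

    Uses the constraints p*F_bg(x) <= F_emp(x) <= p*F_bg(x) + (1-p),
    relaxed by a KS-style tolerance eps = tol_const / sqrt(n).
    """
    x = np.sort(x)
    n = x.size
    F_emp = np.arange(1, n + 1) / n
    F_bg = bg_cdf(x)
    eps = tol_const / np.sqrt(n)
    with np.errstate(divide='ignore', invalid='ignore'):
        upper1 = np.nanmin(np.where(F_bg > 0, (F_emp + eps) / F_bg, np.inf))
        upper2 = np.nanmin(np.where(F_bg < 1, (1 - F_emp + eps) / (1 - F_bg), np.inf))
    return float(np.clip(min(upper1, upper2), 0.0, 1.0))

def bayes_separate(x, p_bg, bg_pdf, outlier_pdf):
    """Label each point as background (True) when its posterior exceeds 0.5."""
    return p_bg * bg_pdf(x) > (1 - p_bg) * outlier_pdf(x)

# Synthetic example: N(0,1) background contaminated by a shifted component
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 900), rng.normal(5, 0.5, 100)])
p_hat = estimate_background_proportion(x, stats.norm(0, 1).cdf)
lo, hi = x.min(), x.max()
is_bg = bayes_separate(x, p_hat, stats.norm(0, 1).pdf,
                       lambda v: np.full_like(v, 1.0 / (hi - lo)))  # assumed uniform outlier pdf
print(f"estimated background proportion: {p_hat:.2f}, flagged outliers: {(~is_bg).sum()}")
```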
Abstract:
Aim: To undertake a national study of teaching, learning and assessment in UK schools of pharmacy. Design: Triangulation of course documentation, 24 semi-structured interviews undertaken with 29 representatives from the schools and a survey of all final year students (n=1,847) in the 15 schools within the UK during 2003–04. Subjects and setting: All established UK pharmacy schools and final year MPharm students. Outcome measures: Data were combined and analysed under the topics of curriculum, teaching and learning, assessment, multi-professional teaching and learning, placement education and research projects. Results: Professional accreditation was the main driver for curriculum design but links to preregistration training were poor. Curricula were consistent but offered little student choice. On average half the curriculum was science-based. Staff supported the science content but students less so. Courses were didactic but schools were experimenting with new methods of learning. Examinations were the principal form of assessment but the contribution of practice to the final degree ranged considerably (21–63%). Most students considered the assessment load to be about right but with too much emphasis upon knowledge. Assessment of professional competence was focused upon dispensing and pharmacy law. All schools undertook placement teaching in hospitals but there was little in community/primary care. There was little inter-professional education. Resources and logistics were the major limiters. Conclusions: There is a need for an integrated review of the accreditation process for the MPharm and preregistration training and redefinition of professional competence at an undergraduate level.
Abstract:
Visualising data for exploratory analysis is a major challenge in many applications. Visualisation allows scientists to gain insight into the structure and distribution of the data, for example finding common patterns and relationships between samples as well as variables. Typically, visualisation methods like principal component analysis and multi-dimensional scaling are employed. These methods are favoured because of their simplicity, but they cannot cope with missing data and it is difficult to incorporate prior knowledge about properties of the variable space into the analysis; this is particularly important in the high-dimensional, sparse datasets typical in geochemistry. In this paper we show how to utilise a block-structured correlation matrix using a modification of a well-known non-linear probabilistic visualisation model, the Generative Topographic Mapping (GTM), which can cope with missing data. The block structure supports direct modelling of strongly correlated variables. We show that by including prior structural information it is possible to improve both the data visualisation and the model fit. These benefits are demonstrated on artificial data as well as a real geochemical dataset used for oil exploration, where the proposed modifications improved the missing data imputation results by 3 to 13%.
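As a small illustration of what "block structure" means here (plain NumPy; the grouping and correlation value are invented, and this is not the paper's GTM modification itself), a block-structured correlation matrix can be assembled from a grouping of variables known to co-vary:

```python
import numpy as np

def block_correlation(groups, rho_within=0.8):
    """Build a block-structured correlation matrix from variable groupings.

    groups     : list of lists of variable indices, e.g. [[0, 1, 2], [3, 4]]
    rho_within : assumed correlation between variables in the same block
    Variables in different blocks are treated as uncorrelated (zero entries).
    """
    D = sum(len(g) for g in groups)
    C = np.eye(D)
    for g in groups:
        for i in g:
            for j in g:
                if i != j:
                    C[i, j] = rho_within
    return C

# Example: five variables forming two correlated blocks (indices are hypothetical)
C = block_correlation([[0, 1, 2], [3, 4]], rho_within=0.8)
print(C)
# Such a matrix can then serve as prior structural information for the noise
# model of a visualisation method, in the spirit of the GTM extension above.
```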
Abstract:
The INTAMAP FP6 project has developed an interoperable framework for real-time automatic mapping of critical environmental variables by extending spatial statistical methods and employing open, web-based data exchange protocols and visualisation tools. This paper gives an overview of the underlying problem and of the project, and discusses which problems it has solved and which open problems seem most relevant to address next. The interpolation problem that INTAMAP solves is the generic problem of spatial interpolation of environmental variables without user interaction, based on measurements of e.g. PM10, rainfall or gamma dose rate, at arbitrary locations or over a regular grid covering the area of interest. It deals with problems of varying spatial resolution of measurements, the interpolation of averages over larger areas, and with providing information on the interpolation error to the end user. In addition, monitoring network optimisation is addressed in a non-automatic context.
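INTAMAP builds on geostatistical (kriging-type) methods; as a rough stand-in for the kind of automatic interpolation with error information it provides, the sketch below uses a Gaussian-process regressor from scikit-learn. All coordinates, values and kernel parameters are invented for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel, ConstantKernel

# Synthetic monitoring-station data: coordinates (x, y) and a measured value
rng = np.random.default_rng(42)
coords = rng.uniform(0, 100, size=(60, 2))                       # hypothetical station locations
values = np.sin(coords[:, 0] / 20) + 0.1 * rng.normal(size=60)   # stand-in for e.g. gamma dose rate

# Gaussian-process interpolation: the kernel plays the role of a covariance model
kernel = ConstantKernel(1.0) * RBF(length_scale=20.0) + WhiteKernel(noise_level=0.01)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(coords, values)

# Regular grid covering the area of interest
gx, gy = np.meshgrid(np.linspace(0, 100, 50), np.linspace(0, 100, 50))
grid = np.c_[gx.ravel(), gy.ravel()]

# Prediction plus standard deviation: the latter is the kind of interpolation-error
# information an automatic-mapping service would return to the end user
pred, std = gp.predict(grid, return_std=True)
print("mean predicted value:", pred.mean(), "mean predictive std:", std.mean())
```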
Abstract:
Tonal, textural and contextual properties are used in manual photointerpretation of remotely sensed data. This study used these three attributes to produce a lithological map of semi-arid northwest Argentina by semi-automatic computer classification of remotely sensed data. Three different types of satellite data were investigated: LANDSAT MSS, TM and SIR-A imagery. Supervised classification procedures using tonal features only produced poor results: LANDSAT MSS achieved classification accuracies in the range of 40 to 60%, while accuracies of 50 to 70% were achieved using LANDSAT TM data. The addition of SIR-A data increased the classification accuracy. The improvement of TM over MSS arises from the better discrimination of geological materials afforded by the middle-infrared bands of the TM sensor. The maximum likelihood classifier consistently produced classification accuracies 10 to 15% higher than either the minimum-distance-to-means or decision tree classifier, though this improved accuracy was obtained at the cost of greatly increased processing time. A new type of classifier, the spectral shape classifier, which is computationally as fast as a minimum-distance-to-means classifier, is described; however, its results were disappointing, being lower in most cases than those of the minimum distance or decision tree procedures. Because the classification results using only tonal features were unacceptably poor, textural attributes were investigated. Texture is an important attribute used by photogeologists to discriminate lithology. For TM data, texture measures were found to increase the classification accuracy by up to 15%; for LANDSAT MSS data, texture measures did not provide any significant increase in accuracy. For TM data, second-order texture, especially the SGLDM-based measures, produced the highest classification accuracy. Contextual post-processing increased classification accuracy and improved the visual appearance of classified output by removing isolated misclassified pixels that tend to clutter classified images. Simple contextual features, such as mode filters, were found to outperform more complex features such as gravitational filters or minimal-area replacement methods; generally, the larger the filter, the greater the increase in accuracy. Production rules were used to build a knowledge-based system that used tonal and textural features to identify sedimentary lithologies in each of the two test sites; it was able to identify six out of ten lithologies correctly.
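A compact sketch of two of the steps described, assuming Python with NumPy/SciPy and synthetic bands standing in for the tonal and textural features: per-class Gaussian maximum likelihood classification, followed by mode-filter contextual post-processing to remove isolated misclassified pixels.

```python
import numpy as np
from scipy.stats import multivariate_normal
from scipy.ndimage import generic_filter

def max_likelihood_classify(image, training):
    """Per-pixel Gaussian maximum likelihood classification.

    image    : (rows, cols, bands) array of tonal/textural features
    training : dict class_id -> (n_samples, bands) training pixels
    """
    r, c, b = image.shape
    flat = image.reshape(-1, b)
    labels = sorted(training.keys())
    scores = []
    for cls in labels:
        samples = training[cls]
        mvn = multivariate_normal(samples.mean(axis=0), np.cov(samples, rowvar=False))
        scores.append(mvn.logpdf(flat))
    return np.array(labels)[np.argmax(scores, axis=0)].reshape(r, c)

def mode_filter(labels, size=5):
    """Contextual post-processing: replace each label by the modal label in its
    neighbourhood, removing isolated misclassified pixels."""
    return generic_filter(labels, lambda w: np.bincount(w.astype(int)).argmax(),
                          size=size, mode='nearest')

# Synthetic two-band scene with two 'lithologies' (values are illustrative)
rng = np.random.default_rng(0)
scene = rng.normal(loc=0.0, scale=1.0, size=(60, 60, 2))
scene[:, 30:, :] += 3.0                                   # second unit brighter in both bands
training = {0: scene[:10, :10].reshape(-1, 2), 1: scene[:10, 40:50].reshape(-1, 2)}
raw = max_likelihood_classify(scene, training)
smoothed = mode_filter(raw)
print("pixels changed by mode filter:", int((raw != smoothed).sum()))
```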
Abstract:
We analyze the steady-state propagation of optical pulses in fiber transmission systems with lumped nonlinear optical devices (NODs) placed periodically in the line. For the first time to our knowledge, a theoretical model is developed to describe the transmission regime with a quasilinear pulse evolution along the transmission line and the point action of NODs. We formulate the mapping problem for pulse propagation in a unit cell of the line and show that in the particular application to nonlinear optical loop mirrors, the steady-state pulse characteristics predicted by the theory accurately reproduce the results of direct numerical simulations.
Abstract:
We analyze the steady-state propagation of optical pulses in fiber transmission systems with lumped nonlinear optical devices (NODs) placed periodically in the line. For the first time to our knowledge, a theoretical model is developed to describe the transmission regime with a quasilinear pulse evolution along the transmission line and the point action of NODs. We formulate the mapping problem for pulse propagation in a unit cell of the line and show that in the particular application to nonlinear optical loop mirrors, the steady-state pulse characteristics predicted by the theory accurately reproduce the results of direct numerical simulations. © 2005 Springer Science+Business Media, Inc.
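A minimal numerical sketch of the kind of unit-cell map described, assuming Python with NumPy: quasilinear (dispersion-only) propagation over one fibre span, followed by the point action of a nonlinear optical loop mirror. The NOLM transfer used below is the standard lossless CW expression, and all parameter values and the sign convention are illustrative rather than taken from the paper.

```python
import numpy as np

# Illustrative parameters (not taken from the paper)
beta2 = -0.5      # ps^2/km, anomalous dispersion of the transmission fibre
gamma = 2.0       # 1/(W km), nonlinear coefficient of the loop fibre
L_span = 50.0     # km, fibre span between the lumped NODs
L_loop = 1.0      # km, NOLM loop length
alpha = 0.45      # NOLM power-splitting ratio

T = 2 ** 10
t = np.linspace(-50, 50, T, endpoint=False)          # ps, time window
dt = t[1] - t[0]
omega = 2 * np.pi * np.fft.fftfreq(T, dt)            # rad/ps

def linear_span(u, dist):
    """Quasilinear (dispersion-only) propagation over one fibre span."""
    return np.fft.ifft(np.fft.fft(u) * np.exp(0.5j * beta2 * omega ** 2 * dist))

def nolm(u):
    """Point action of a lossless NOLM (Doran-Wood CW transfer function)."""
    P = np.abs(u) ** 2
    return u * (alpha * np.exp(1j * gamma * L_loop * alpha * P)
                - (1 - alpha) * np.exp(1j * gamma * L_loop * (1 - alpha) * P))

# Map for one unit cell of the line: a span of fibre, then the lumped NOD
u = np.sqrt(0.01) / np.cosh(t / 5.0)                 # weak sech-shaped input pulse
for _ in range(20):                                   # iterate the unit-cell map
    u = nolm(linear_span(u, L_span))
print("peak power after 20 cells:", np.abs(u).max() ** 2)
```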
Abstract:
This PhD thesis analyses networks of knowledge flows, focusing on the role of indirect ties in the knowledge transfer, knowledge accumulation and knowledge creation process. It extends and improves existing methods for mapping networks of knowledge flows in two different applications and contributes to two streams of research. To support the underlying idea of this thesis, namely finding an alternative method for ranking indirect network ties that sheds new light on the dynamics of knowledge transfer, we apply Ordered Weighted Averaging (OWA) to two different network contexts. Knowledge flows in patent citation networks and a company supply chain network are analysed using Social Network Analysis (SNA) and the OWA operator. The OWA is used here for the first time (i) to rank indirect citations in patent networks, providing new insight into their role in transferring knowledge among network nodes, and to analyse a long chain of patent generations over 13 years; and (ii) to rank indirect relations in a company supply chain network, to shed light on the role of indirectly connected individuals involved in the knowledge transfer and creation processes and to contribute to the literature on knowledge management in a supply chain. In doing so, indirect ties are measured and their role as a means of knowledge transfer is shown. Thus, this thesis represents a first attempt to bridge the OWA and SNA fields and to show that the two methods can be used together to enrich the understanding of the role of indirectly connected nodes in a network. More specifically, the OWA scores enrich our understanding of knowledge evolution over time within complex networks. Future research can show the usefulness of the OWA operator in other complex networks, such as online social networks consisting of thousands of nodes.
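A small illustration of an OWA operator applied to a patent citation network, assuming Python with networkx and NumPy: each patent is scored by OWA-aggregating the number of patents citing it directly and through successive indirect generations. The weighting scheme and the toy network are invented and do not reproduce the thesis's actual ranking.

```python
import networkx as nx
import numpy as np

def owa(values, weights):
    """Ordered Weighted Averaging: sort values in descending order, then weight them."""
    v = np.sort(np.asarray(values, dtype=float))[::-1]
    w = np.asarray(weights, dtype=float)
    return float(np.dot(v, w / w.sum()))

def indirect_citation_score(G, patent, max_generation=3, weights=(0.5, 0.3, 0.2)):
    """Score a patent by OWA-aggregating citation counts per generation.

    G is a DiGraph with an edge u -> v meaning 'u cites v'; we therefore walk
    the reversed graph to count patents citing `patent` at distance 1, 2, ...
    """
    dist = nx.single_source_shortest_path_length(G.reverse(), patent,
                                                 cutoff=max_generation)
    counts = [sum(1 for d in dist.values() if d == g)
              for g in range(1, max_generation + 1)]
    return owa(counts, weights)

# Toy citation chain: D and E cite C, C cites B, B cites A (hypothetical patents)
G = nx.DiGraph([("B", "A"), ("C", "B"), ("D", "C"), ("E", "C")])
for p in ["A", "B", "C"]:
    print(p, indirect_citation_score(G, p))
```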
Abstract:
This thesis addressed the problem of risk analysis in mental healthcare, with respect to the GRiST project at Aston University. That project provides a risk-screening tool based on the knowledge of 46 experts, captured as mind maps that describe relationships between risks and patterns of behavioural cues. Mind mapping, though, fails to impose control over content, and is not considered to formally represent knowledge. In contrast, this thesis treated GRiST's mind maps as a rich knowledge base in need of refinement; that process drew on existing techniques for designing databases and knowledge bases. Identifying well-defined mind map concepts, though, was hindered by spelling mistakes, and by ambiguity and lack of coverage in the tools used for researching words. A novel use of the Edit Distance overcame those problems, by assessing similarities between mind map texts, and between spelling mistakes and suggested corrections. That algorithm further identified stems, the shortest text string found in related word-forms. As opposed to existing approaches' reliance on built-in linguistic knowledge, this thesis devised a novel, more flexible text-based technique. An additional tool, Correspondence Analysis, found patterns in word usage that allowed machines to determine likely intended meanings for ambiguous words. Correspondence Analysis further produced clusters of related concepts, which in turn drove the automatic generation of novel mind maps. Such maps underpinned adjuncts to the mind mapping software used by GRiST; one such new facility generated novel mind maps to reflect the collected expert knowledge on any specified concept. Mind maps from GRiST are stored as XML, which suggested storing them in an XML database. In fact, the entire approach here is "XML-centric", in that all stages rely on XML as far as possible. An XML-based query language allows users to retrieve information from the mind map knowledge base. The approach, it was concluded, will prove valuable to mind mapping in general, and to detecting patterns in any type of digital information.
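A minimal illustration of the kind of Edit Distance matching described, in plain Python; the vocabulary and the misspelled cue words are invented.

```python
def edit_distance(a, b):
    """Levenshtein edit distance computed by dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def closest_concept(term, vocabulary):
    """Suggest the vocabulary entry with the smallest edit distance to `term`."""
    return min(vocabulary, key=lambda v: edit_distance(term.lower(), v.lower()))

# Hypothetical mind-map vocabulary and misspelled cue words
vocabulary = ["suicide risk", "self harm", "hallucinations", "medication"]
for misspelt in ["halucinations", "medicaton", "self-harm"]:
    print(misspelt, "->", closest_concept(misspelt, vocabulary))
```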