847 results for information, knowledge
Abstract:
As information and communications technology (ICT) involves both traditional capital and knowledge capital, potential spillovers through various mechanisms can occur. We posit that ICT capital may boost productivity growth, not only in the home country, but also in other countries. In this paper, we provide empirical evidence of such spillovers using panel data on 37 countries from 1996 to 2004. Our results support the existence of ICT spillovers across country borders. Furthermore, we find that developing countries could reap more benefits from ICT spillovers than developed countries. This is particularly important for policy decisions regarding national trade liberalization and economic integration. Developing economies that are more open to foreign trade may have an economic advantage and may develop knowledge-intensive activities, which will lead to economic development in the long run.
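The abstract does not spell out the estimation strategy, so the following is only a minimal sketch of a fixed-effects panel regression with domestic and foreign ICT capital terms; the variable names, the data file and the spillover proxy are hypothetical and the paper's actual specification may differ.

```python
# Illustrative sketch of a cross-country ICT spillover regression (not the paper's model).
# Variable names (gdp_growth, ict_capital, ...) and the CSV file are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("panel_1996_2004.csv")  # hypothetical: one row per country-year

# Foreign ICT capital: average ICT capital of all *other* countries in the same year,
# a simple proxy for cross-border spillovers.
yearly_total = df.groupby("year")["ict_capital"].transform("sum")
n_countries = df.groupby("year")["ict_capital"].transform("count")
df["foreign_ict"] = (yearly_total - df["ict_capital"]) / (n_countries - 1)

# Fixed-effects growth regression: country and year dummies absorb unobserved
# heterogeneity; the coefficient on foreign_ict captures the spillover effect.
model = smf.ols(
    "gdp_growth ~ ict_capital + foreign_ict + C(country) + C(year)", data=df
).fit(cov_type="cluster", cov_kwds={"groups": df["country"]})
print(model.params[["ict_capital", "foreign_ict"]])
```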
Abstract:
Knowledge has been a subject of interest and inquiry for thousands of years, since at least the time of the ancient Greeks, and no doubt even before that. “What is knowledge” continues to be an important topic of discussion in philosophy. More recently, interest in managing knowledge has grown in step with the perception that increasingly we live in a knowledge-based economy. Drucker (1969) is usually credited as being the first to popularize the concept of the knowledge-based economy, by linking the importance of knowledge with rapid technological change. Karl Wiig coined the term knowledge management (hereafter KM) for a NATO seminar in 1986, and its popularity took off following the publication of Nonaka and Takeuchi’s book “The Knowledge Creating Company” (Nonaka & Takeuchi, 1995). Knowledge creation is in fact just one of many activities involved in KM. Others include sharing, retaining, refining, and using knowledge. There are many such lists of activities (Holsapple & Joshi, 2000; Probst, Raub, & Romhardt, 1999; Skyrme, 1999; Wiig, De Hoog, & Van der Spek, 1997). Both academic and practical interest in KM has continued to increase throughout the last decade. In this article, the different types of knowledge are first outlined, then various routes by which knowledge management can be implemented are discussed, with a process-based route advocated. An explanation follows of how people, processes, and technology need to fit together for effective KM, and some examples of this route in use are given. Finally, there is a look towards the future.
Abstract:
Background: Atrial fibrillation (AF) patients at high risk of stroke are recommended anticoagulation with warfarin. However, the benefit of warfarin depends on the time spent within the target therapeutic range (TTR) of the international normalised ratio (INR) (2.0 to 3.0). AF patients possess limited knowledge of their disease and of warfarin treatment, and this can impact on INR control. Education can improve patients' understanding of warfarin therapy and of the factors which affect INR control. Methods/Design: A randomised controlled trial of an intensive educational intervention consisting of group sessions (between 2 and 8 patients) containing standardised information about the risks and benefits associated with oral anticoagulation (OAC) therapy, lifestyle interactions, and the importance of monitoring and control of the INR. Information will be presented within an 'expert-patient' focussed DVD, a revised educational booklet and patient worksheets. 200 warfarin-naïve patients who are eligible for warfarin will be randomised to either the intervention or usual care group. All patients must have ECG-documented AF and be eligible for warfarin (according to the NICE AF guidelines). Exclusion criteria include: age < 18 years, contraindication(s) to warfarin, history of warfarin use, valvular heart disease, cognitive impairment, inability to speak/read English, and disease likely to cause death within 12 months. The primary endpoint is time spent in TTR. Secondary endpoints include measures of quality of life (AF-QoL-18), anxiety and depression (HADS), knowledge of AF and anticoagulation, beliefs about medication (BMQ) and illness representations (IPQ-R). Clinical outcomes, including bleeding, stroke and interruption to anticoagulation, will be recorded. All outcome measures will be assessed at baseline and at 1, 2, 6 and 12 months post-intervention. Discussion: More data are needed on the clinical benefit of educational intervention for AF patients receiving warfarin. Trial registration: ISRCTN93952605
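The primary endpoint is time in therapeutic range, and the abstract does not say how it will be calculated; the Rosendaal linear-interpolation method is the usual choice, so the following is a minimal sketch assuming that method and the 2.0 to 3.0 INR target (illustrative only, not the trial's specified algorithm).

```python
# Sketch of time-in-therapeutic-range (TTR) via Rosendaal linear interpolation.
# Assumes INR varies linearly between measurement days; illustrative only.
def ttr(days, inrs, low=2.0, high=3.0):
    """days: measurement days (ints, ascending); inrs: INR value at each measurement."""
    in_range = 0
    total = 0
    for (d0, v0), (d1, v1) in zip(zip(days, inrs), zip(days[1:], inrs[1:])):
        span = d1 - d0
        for step in range(span):
            # Linearly interpolated INR for each day between two measurements.
            inr = v0 + (v1 - v0) * step / span
            in_range += low <= inr <= high
            total += 1
    return in_range / total if total else 0.0

# Example: four INR checks over six weeks.
print(ttr([0, 14, 28, 42], [1.8, 2.4, 3.3, 2.6]))  # fraction of days in range
```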
Abstract:
Visualising data for exploratory analysis is a major challenge in many applications. Visualisation allows scientists to gain insight into the structure and distribution of the data, for example finding common patterns and relationships between samples as well as variables. Typically, visualisation methods like principal component analysis and multi-dimensional scaling are employed. These methods are favoured because of their simplicity, but they cannot cope with missing data and it is difficult to incorporate prior knowledge about properties of the variable space into the analysis; this is particularly important in the high-dimensional, sparse datasets typical in geochemistry. In this paper we show how to utilise a block-structured correlation matrix using a modification of a well-known non-linear probabilistic visualisation model, the Generative Topographic Mapping (GTM), which can cope with missing data. The block structure supports direct modelling of strongly correlated variables. We show that by including prior structural information it is possible to improve both the data visualisation and the model fit. These benefits are demonstrated on artificial data as well as a real geochemical dataset used for oil exploration, where the proposed modifications improved the missing data imputation results by 3 to 13%.
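The structural prior here is a block-structured correlation matrix over groups of related variables. Below is a minimal sketch of how such a matrix might be assembled; the block sizes and within-block correlation are illustrative assumptions, and the GTM modification itself is not reproduced.

```python
# Sketch: build a block-structured correlation matrix for grouped variables.
# Within a block, variables share a correlation rho; across blocks they are
# uncorrelated. Block sizes and rho are illustrative assumptions.
import numpy as np

def block_correlation(block_sizes, rho=0.7):
    dim = sum(block_sizes)
    corr = np.eye(dim)
    start = 0
    for size in block_sizes:
        block = np.full((size, size), rho)
        np.fill_diagonal(block, 1.0)
        corr[start:start + size, start:start + size] = block
        start += size
    return corr

# Three groups of strongly correlated geochemical variables (hypothetical sizes).
C = block_correlation([4, 3, 5], rho=0.7)
print(C.shape, np.allclose(C, C.T))  # (12, 12) True
```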
Abstract:
Retrospective clinical data presents many challenges for data mining and machine learning. The transcription of patient records from paper charts and the subsequent manipulation of the data often result in high volumes of noise as well as a loss of other important information. In addition, such datasets often fail to represent expert medical knowledge and reasoning in any explicit manner. In this research we describe applying data mining methods to retrospective clinical data to build a prediction model for asthma exacerbation severity for pediatric patients in the emergency department. Difficulties in building such a model forced us to investigate alternative strategies for analyzing and processing retrospective data. This paper describes this process, together with an approach to mining retrospective clinical data that incorporates formalized external expert knowledge (secondary knowledge sources) into the classification task. This knowledge is used to partition the data into a number of coherent sets, where each set is explicitly described in terms of the secondary knowledge source. Instances from each set are then classified in a manner appropriate for the characteristics of the particular set. We present our methodology and outline a set of experimental results that demonstrate some advantages and some limitations of our approach. © 2008 Springer-Verlag Berlin Heidelberg.
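A minimal sketch of the partition-then-classify idea described above, assuming a hypothetical secondary knowledge source that maps each instance to a named set; the partitioning rule, feature names and classifier choice are illustrative, not those used in the study.

```python
# Sketch: partition instances using an external (secondary) knowledge source,
# then train and apply a separate classifier per partition.
# The partitioning rule, feature names, and classifier are illustrative only.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

def assign_set(row):
    # Hypothetical guideline-derived rule: partition patients by age band.
    return "preschool" if row["age"] < 6 else "school_age"

def fit_per_partition(df, feature_cols, label_col="severity"):
    models = {}
    for name, subset in df.groupby(df.apply(assign_set, axis=1)):
        clf = DecisionTreeClassifier(max_depth=3, random_state=0)
        clf.fit(subset[feature_cols], subset[label_col])
        models[name] = clf
    return models

def predict_by_partition(models, df, feature_cols):
    groups = df.apply(assign_set, axis=1)
    preds = pd.Series(index=df.index, dtype=object)
    for name, subset in df.groupby(groups):
        preds.loc[subset.index] = models[name].predict(subset[feature_cols])
    return preds

# Usage with hypothetical training and test DataFrames:
# models = fit_per_partition(train_df, ["age", "resp_rate", "o2_sat"])
# severity_predictions = predict_by_partition(models, test_df, ["age", "resp_rate", "o2_sat"])
```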
Abstract:
The research is concerned with the terminological problems that computer users experience when they try to formulate their knowledge needs and attempt to access information contained in computer manuals or online help systems while building up their knowledge. This is the recognised but unresolved problem of communication between the specialist and the layman. The initial hypothesis was that computer users, through their knowledge of language, have some prior knowledge of the subdomain of computing they are trying to come to terms with, and that language can be a facilitating mechanism, or an obstacle, in the development of that knowledge. Related to this is the supposition that users have a conceptual apparatus based on both theoretical knowledge and experience of the world, and of several domains of special reference related to the environment in which they operate. The theoretical argument was developed by exploring the relationship between knowledge and language, and by considering the efficacy of terms as agents of special subject knowledge representation. Having charted in a systematic way the territory of knowledge sources and types, we were able to establish that there are many aspects of knowledge which cannot be represented by terms. This submission is important, as it leads to the realisation that significant elements of knowledge are being disregarded in retrieval systems because they are normally expressed by language elements which do not enjoy the status of terms. Furthermore, we introduced the notion of 'linguistic ease of retrieval' as a challenge to more conventional thinking which focuses on retrieval results.
Abstract:
Xerox Customer Engagement activity is informed by the "Go To Market" strategy and the "Intelligent Coverage" sales philosophy. The realisation of this philosophy necessitates a sophisticated level of Market Understanding, and the effective integration of the direct channels of Customer Engagement. Sophisticated Market Understanding requires the mapping and coding of the entire UK market at the DMU (Decision Making Unit) level, which in turn enables the creation of tailored coverage prescriptions. Effective Channel Integration is made possible by the organisation of Customer Engagement work according to a single, process-defined structure: the Selling Process. Organising by process facilitates the discipline of Task Substitution, which leads logically to the creation of Hybrid Selling models. Productive Customer Engagement requires Selling Process specialisation by industry sector, customer segment and product group. The research shows that Xerox's Market Database (MDB) plays a central role in delivering the Go To Market strategic aims. It is a tool for knowledge-based selling, enables productive SFA (Sales Force Automation) and, in sum, is critical to the efficient and effective deployment of Customer Engagement resources. Intelligent Coverage is not possible without the MDB. Analysis of the case evidence has resulted in the definition of 60 idiographic statements. These statements describe how Xerox organises and manages three direct channels of Customer Engagement: Face to Face, Telebusiness and Ebusiness. Xerox is shown to employ a process-oriented, IT-enabled, holistic approach to Customer Engagement productivity. The significance of the research is that it represents a detailed (perhaps unequalled) level of rich description of the interplay between IT and a holistic, process-oriented management philosophy.
Abstract:
The drug information sources currently available to general practice pharmacists have been identified. The use of and attitudes to these sources were assessed, as were the perceived information needs of practising pharmacists. The special requirements of women pharmacists and of pharmacists working part-time were studied. The role of the medical representative as an information source for pharmacists was evaluated. Participation in continuing education programmes as a vital means of ensuring current information awareness and knowledge for the practising profession has been considered. Investigations were mainly pursued by questionnaire survey, while computer facilities were used for the processing and analysis of data. The desirability of collated and evaluated information from one or more independent authoritative sources has been discussed. The increasing advisory role of the general practice pharmacist and the needs of the patient and potential customer have been discussed, with projections for the pharmacist's future health care contribution.
Abstract:
DUE TO COPYRIGHT RESTRICTIONS ONLY AVAILABLE FOR CONSULTATION AT ASTON UNIVERSITY LIBRARY AND INFORMATION SERVICES WITH PRIOR ARRANGEMENT
Abstract:
The research described here concerns the development of metrics and models to support the development of hybrid (conventional/knowledge-based) integrated systems. The thesis argues from the point that, although it is well known that estimating the cost, duration and quality of information systems is a difficult task, it is far from clear what sorts of tools and techniques would adequately support a project manager in the estimation of these properties. A literature review shows that metrics (measurements) and estimating tools have been developed for conventional systems since the 1960s, while there has been very little research on metrics for knowledge based systems (KBSs). Furthermore, although there are a number of theoretical problems with many of the 'classic' metrics developed for conventional systems, it also appears that the tools which such metrics can be used to develop are not widely used by project managers. A survey of large UK companies was carried out, which confirmed this continuing state of affairs. Before any useful tools could be developed, therefore, it was important to find out why project managers were not already using these tools. By characterising those companies that use software cost estimating (SCE) tools against those which could but do not, it was possible to recognise the involvement of the client/customer in the process of estimation. Pursuing this point, a model of the early estimating and planning stages (the EEPS model) was developed to test exactly where estimating takes place. The EEPS model suggests that estimating could take place either before a fully-developed plan has been produced, or while this plan is being produced. If it were the former, then SCE tools would be particularly useful, since there is very little other data available from which to produce an estimate. A second survey, however, indicated that project managers see estimating as essentially the latter, at which point project management tools are available to support the process. It would seem, therefore, that SCE tools are not being used because project management tools are being used instead. The issue here is not with the method of developing an estimating model or tool, but with the way in which "an estimate" is intimately tied to an understanding of what tasks are being planned. Current SCE tools are perceived by project managers as targeting the wrong point of estimation. A model (called TABATHA) is then presented which describes how an estimating tool based on an analysis of tasks would thus fit into the planning stage. The issue of whether metrics can be usefully developed for hybrid systems (which also contain KBS components) is tested by extending a number of "classic" program size and structure metrics to a KBS language, Prolog. Measurements of lines of code, Halstead's operators/operands, McCabe's cyclomatic complexity, Henry & Kafura's data flow fan-in/out and post-release reported errors were taken for a set of 80 commercially-developed LPA Prolog programs. By re-defining the metric counts for Prolog it was found that estimates of program size and error-proneness comparable to the best conventional studies are possible. This suggests that metrics can be usefully applied to KBS languages such as Prolog, and thus that the development of metrics and models to support the development of hybrid information systems is both feasible and useful.
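As an illustration of how classic size metrics can be re-defined for Prolog, the sketch below counts non-comment lines of code and clauses per predicate in a source file; the counting rules are simplified assumptions, not the thesis's exact definitions.

```python
# Sketch: simple size metrics for a Prolog source file.
# Counts non-blank, non-comment lines and clauses per predicate head.
# The counting rules are simplified; the thesis's re-definitions may differ.
import re
from collections import Counter

def prolog_metrics(source: str):
    loc = 0
    clause_heads = Counter()
    for line in source.splitlines():
        stripped = line.strip()
        if not stripped or stripped.startswith("%"):
            continue  # skip blank lines and line comments
        loc += 1
        # A clause head: an atom at the start of a line, followed by '(', ':-' or '.'
        match = re.match(r"([a-z]\w*)\s*(?:\(|:-|\.)", stripped)
        if match:
            clause_heads[match.group(1)] += 1
    return {"loc": loc, "clauses_per_predicate": dict(clause_heads)}

example = """
% reverse a list
rev([], []).
rev([H|T], R) :- rev(T, RT), append(RT, [H], R).
"""
print(prolog_metrics(example))  # {'loc': 2, 'clauses_per_predicate': {'rev': 2}}
```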
Abstract:
This Thesis addresses the problem of automated, false-positive-free detection of epileptic events by the fusion of information extracted from simultaneously recorded electroencephalographic (EEG) and electrocardiographic (ECG) time series. The approach relies on a biomedical case for the coupling of the Brain and Heart systems through the central autonomic network during temporal lobe epileptic events: neurovegetative manifestations associated with temporal lobe epileptic events consist of alterations to the cardiac rhythm. From a neurophysiological perspective, epileptic episodes are characterised by a loss of complexity of the state of the brain. The probabilistic description of arrhythmias observed during temporal lobe epileptic events and the information-theoretic description of the complexity of the state of the brain are integrated in a fusion-of-information framework for temporal lobe epileptic seizure detection. The main contributions of the Thesis include the introduction of a biomedical case for the coupling of the Brain and Heart systems during temporal lobe epileptic seizures, partially reported in the clinical literature; the investigation of measures for the characterisation of ictal events from the EEG time series towards their integration in a fusion-of-knowledge framework; the probabilistic description of arrhythmias observed during temporal lobe epileptic events towards their integration in a fusion-of-knowledge framework; and the investigation of the different levels of the fusion-of-information architecture at which to perform the combination of information extracted from the EEG and ECG time series. The method designed in the Thesis for the false-positive-free automated detection of epileptic events achieved a false-positive rate of zero on the dataset of long-term recordings used in the Thesis.
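A minimal sketch of feature-level fusion in the spirit described above: a spectral-entropy measure of EEG complexity is combined with an RR-interval variability measure from the ECG into a single feature vector. Both measures are illustrative stand-ins, not the thesis's exact characterisations or its fusion architecture.

```python
# Sketch: feature-level fusion of an EEG complexity measure and an ECG rhythm measure.
# Spectral entropy and RMSSD are illustrative stand-ins for the measures investigated
# in the thesis, not its exact characterisations.
import numpy as np

def spectral_entropy(eeg):
    """Normalised Shannon entropy of the EEG power spectrum (lower ~ less complex)."""
    psd = np.abs(np.fft.rfft(eeg)) ** 2
    psd = psd / psd.sum()
    psd = psd[psd > 0]
    return float(-(psd * np.log2(psd)).sum() / np.log2(len(psd)))

def rmssd(r_peak_times):
    """Root mean square of successive RR-interval differences (heart-rate variability)."""
    rr = np.diff(np.asarray(r_peak_times))
    return float(np.sqrt(np.mean(np.diff(rr) ** 2)))

def fused_features(eeg_window, r_peak_times):
    # One vector combining brain and heart descriptors, which a downstream
    # classifier could use for seizure detection.
    return np.array([spectral_entropy(eeg_window), rmssd(r_peak_times)])

# Example with synthetic data.
rng = np.random.default_rng(0)
eeg = rng.standard_normal(2560)                              # ~10 s of simulated EEG
r_peaks = np.cumsum(0.8 + 0.05 * rng.standard_normal(60))    # ~75 bpm R-peak times (s)
print(fused_features(eeg, r_peaks))
```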
Abstract:
Information technology is at the centre of today’s business environment. The increasing importance of e-commerce and the integration of information systems into all areas of a business mean it is crucial for managers to understand and implement IS (information systems). This major text, now in its second edition, provides the skills and knowledge necessary to choose the right systems, and to develop and manage them effectively. Business Information Systems: Technology, Development and Management assumes no prior knowledge of IS or IT, and emphasises the importance of IS to management decision making. It has a three-part structure: Part One covers hardware and software technologies; Part Two looks at information systems analysis and design; and Part Three describes the strategic management of IS. This successful format allows each section to be studied alongside individual modules, and enables students to focus clearly on specific areas and use the book for more than one course. The book is suitable for college students, and for undergraduate and postgraduate students, taking courses with modules in the practical IT skills of selection, implementation, management and use of BIS. The practical sections are also of use to managers in industry involved in the development and use of IS.
Abstract:
This major text assumes no prior knowledge of IS or IT and builds both business and information systems knowledge, enabling the reader to choose the right systems, to develop them and to manage them effectively. The three-part structure of the book covers: an introduction to business information systems; business information systems development; and business information systems management. Suitable for any IS, BIS or MIS course from undergraduate to MBA level within a Business or Computer Science department.
Abstract:
A comprehensive introduction to the technology, development and management of business information systems. The book assumes no prior knowledge of IS or IT, so new concepts and terms are defined as clearly as possible, with explanations in the text and definitions in the margin. In this fast-moving area, the book covers both the crucial underpinnings of the subject and the most recent business and technology applications. It is written for students on any IS, BIS or MIS course from undergraduate to postgraduate and MBA level within a Business or Computer Science department.
Abstract:
Services-led competitive strategies are critically important to Western manufacturers. This paper contributes to our basic knowledge of such strategies by examining the enabling information and communication technologies that successfully servitized manufacturers appear to be adopting. Although these are preliminary findings from a longer-term research programme, through this paper we seek to offer immediate assistance to manufacturers who wish to understand how they might exploit the servitization movement.