945 results for Information-Analytical System
Abstract:
"Made up of resumés and indexes of documents [of educational significance] ... numbered sequentially with ED prefixes and current Office of Education research projects [with EP prefixes]".
Abstract:
Automatic ontology building is a vital issue in the many fields where ontologies are still built manually. This paper presents a user-centred methodology for ontology construction based on Machine Learning and Natural Language Processing. In our approach, the user selects a corpus of texts and sketches a preliminary ontology for a domain (or selects an existing one), with a preliminary vocabulary associated with the elements of the ontology (lexicalisations). Example sentences involving such lexicalisations (e.g. of the ISA relation) are automatically retrieved from the corpus by the system. Retrieved examples are validated by the user and used by an adaptive Information Extraction system to generate patterns that discover other lexicalisations of the same objects in the ontology, possibly identifying new concepts or relations. New instances are added to the existing ontology or used to tune it. This process is repeated until a satisfactory ontology is obtained. The methodology largely automates the ontology construction process, and the output is an ontology with an associated trained learner that can be used for further ontology modifications.
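As a toy illustration of the retrieve-and-validate step described above (not the authors' system), the sketch below harvests candidate ISA instances with a single "X is a Y" pattern and filters them through a stand-in for the human validation step; the corpus, pattern and validator are all invented for the example.

```python
import re

# Toy sketch of the retrieve-and-validate step; the corpus, the single
# "X is a Y" pattern and the simulated validator are illustrative only.
corpus = [
    "A dolphin is a mammal.",
    "Java is a language.",
    "The meeting is a waste of time.",   # noise a real user would reject
]
ISA = re.compile(r"(\w+) is an? (\w+)")

def user_validates(pair):
    # Stand-in for the human judge who accepts or rejects each example.
    return pair[1] in {"mammal", "language"}

ontology = set()
for sentence in corpus:
    m = ISA.search(sentence)
    if m and user_validates(m.groups()):
        ontology.add(m.groups())          # accepted ISA instance

print(ontology)   # {('dolphin', 'mammal'), ('Java', 'language')}
```

In the full methodology this loop would also generalise the validated examples into new extraction patterns and repeat until the user is satisfied.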
Abstract:
Most of the common techniques for estimating conditional probability densities are inappropriate for applications involving periodic variables. In this paper we introduce three novel techniques for tackling such problems, and investigate their performance using synthetic data. We then apply these techniques to the problem of extracting the distribution of wind vector directions from radar scatterometer data gathered by a remote-sensing satellite.
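The abstract does not name its three techniques, but a standard building block for densities of periodic variables such as wind direction is the von Mises distribution; the sketch below fits one to synthetic angles with SciPy, purely as an illustration of handling periodicity.

```python
import numpy as np
from scipy.stats import vonmises

# Illustrative only: fit a von Mises density (2*pi-periodic, unlike a
# Gaussian) to synthetic wind-direction angles. This is a common choice
# for periodic variables, not necessarily one of the paper's techniques.
rng = np.random.default_rng(0)
angles = vonmises.rvs(kappa=4.0, loc=np.pi / 3, size=500, random_state=rng)

# Estimate the concentration kappa and mean direction; fixing scale=1
# keeps the distribution on the unit circle.
kappa, loc, _ = vonmises.fit(angles, fscale=1)
print(f"kappa = {kappa:.2f}, mean direction = {loc:.2f} rad")
```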
Abstract:
Original paper: European Journal of Information Systems (2001) 10, 135–146; doi:10.1057/palgrave.ejis.3000394. "Organisational learning—a critical systems thinking discipline", P Panagiotidis (Deloitte and Touche, Athens, Greece) and J S Edwards (Aston Business School, Aston University, Birmingham, UK).
This paper deals with the application of critical systems thinking to the domain of organisational learning and knowledge management. Its viewpoint is that deep organisational learning only takes place when the business system's stakeholders reflect on their actions and thus inquire about their purpose(s) in relation to the business system and the other stakeholders they perceive to exist. This is done by reflecting both on the sources of motivation and/or deception contained in their own purposes, and on the sources of collective motivation and/or deception contained in the business system's purpose. The development of an organisational information system that captures, manages and institutionalises meaningful information—a knowledge management system—cannot be separated from organisational learning practices, since it should be the result of those very practices. Although Senge's five disciplines provide a useful starting point for looking at organisational learning, we argue for a critical systems approach instead of an uncritical System Dynamics one that concentrates only on the organisational learning practices.
We proceed to outline a methodology called Business Systems Purpose Analysis (BSPA) that offers a participatory structure for team and organisational learning, upon which stakeholders can take legitimate action based on the force of the better argument. In addition, the organisational learning process in BSPA leads to the development of an intrinsically motivated organisational information system that allows the learning process itself to be institutionalised in the form of an organisational knowledge management system. This could be a specific application, or something as wide-ranging as an Enterprise Resource Planning (ERP) implementation. Examples of the use of BSPA in two ERP implementations are presented.
Abstract:
The primary objective of this research was to understand what kinds of knowledge and skills people use in 'extracting' relevant information from text, and to assess the extent to which expert systems techniques could be applied to automate the process of abstracting. The approach adopted in this thesis is based on research in cognitive science, information science, psycholinguistics and text linguistics. The study addressed the significance of domain knowledge and heuristic rules by developing an information extraction system called INFORMEX. This system, implemented partly in SPITBOL and partly in PROLOG, used a set of heuristic rules to analyse five scientific papers of an expository type, to interpret their content in relation to the key abstract elements, and to extract a set of sentences recognised as relevant for abstracting purposes. Analysis of these extracts revealed that an adequate abstract could be generated. Furthermore, INFORMEX showed that a rule-based system is a suitable computational model for representing experts' knowledge and strategies. This computational technique provided the basis for a new approach to modelling cognition: it showed how experts tackle the task of abstracting by integrating formal knowledge with experiential learning. The thesis demonstrated that empirical and theoretical knowledge can be effectively combined in expert systems technology to provide a valuable starting point for automatic abstracting.
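INFORMEX itself was written in SPITBOL and PROLOG, and its actual rules are not reproduced in the abstract; the Python fragment below is only a generic stand-in showing the shape of a heuristic, cue-phrase-based sentence extractor of the kind such rule-based abstracting systems use.

```python
# Generic stand-in for a heuristic sentence extractor (not INFORMEX's
# actual rules): score sentences by cue phrases and keep the strong ones.
CUE_PHRASES = {
    "the primary objective": 3, "we conclude": 3,
    "this paper": 2, "results show": 2,
    "for example": -1,              # cue that argues *against* extraction
}

def score_sentence(sentence: str) -> int:
    s = sentence.lower()
    return sum(w for cue, w in CUE_PHRASES.items() if cue in s)

def extract_for_abstract(sentences, threshold=2):
    # Keep sentences whose cue-phrase evidence clears the threshold.
    return [s for s in sentences if score_sentence(s) >= threshold]

print(extract_for_abstract([
    "The primary objective of this study is to model abstracting.",
    "For example, consider a toy corpus.",
]))
```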
Abstract:
Objective: Biomedical event extraction is concerned with extracting, from the literature, events that describe changes in the state of bio-molecules. Compared with the protein-protein interaction (PPI) extraction task, which often involves only the extraction of binary relations between two proteins, biomedical event extraction is much harder, since it must deal with complex events consisting of embedded or hierarchical relations among proteins, events, and their textual triggers. In this paper, we propose an information extraction system based on the hidden vector state (HVS) model, called HVS-BioEvent, for biomedical event extraction, and investigate its capability in extracting complex events. Methods and material: HVS has previously been employed for extracting PPIs. In HVS-BioEvent, we propose an automated way to generate abstract annotations for HVS training, and further propose novel machine learning approaches for event trigger word identification and for biomedical event extraction from the HVS parse results. Results: Our proposed system achieves an F-score of 49.57% on the corpus used in the BioNLP'09 shared task, only 2.38% lower than the best-performing system, by UTurku, in the BioNLP'09 shared task. Nevertheless, HVS-BioEvent outperforms UTurku's system on complex event extraction, achieving 36.57% vs. 30.52% for regulation events and 40.61% vs. 38.99% for negative regulation events. Conclusions: The results suggest that the HVS model, with its hierarchical hidden state structure, is indeed more suitable for complex event extraction, since it can naturally model embedded structural context in sentences.
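The abstract does not spell out the trigger-identification classifier, so the snippet below is a deliberately simplified stand-in: it tags tokens against a small trigger lexicon mapping words to event types. The lexicon entries are invented, not the BioNLP'09 data.

```python
# Simplified stand-in for trigger-word identification (not HVS-BioEvent's
# actual classifier): look tokens up in a trigger lexicon that maps words
# to event types. The entries below are invented examples.
TRIGGER_LEXICON = {
    "expression": "Gene_expression",
    "inhibits": "Negative_regulation",
    "activates": "Positive_regulation",
}

def find_triggers(tokens):
    """Return (position, token, event_type) for every lexicon hit."""
    return [(i, t, TRIGGER_LEXICON[t.lower()])
            for i, t in enumerate(tokens) if t.lower() in TRIGGER_LEXICON]

print(find_triggers("IL-2 expression inhibits STAT3".split()))
# [(1, 'expression', 'Gene_expression'), (2, 'inhibits', 'Negative_regulation')]
```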
Abstract:
A major challenge in text mining for biomedicine is automatically extracting protein-protein interactions from the vast biomedical literature. We have constructed an information extraction system for protein-protein interactions based on the Hidden Vector State (HVS) model. The HVS model can be trained using only lightly annotated data while retaining sufficient ability to capture hierarchical structure. When applied to extracting protein-protein interactions, it performed better than other established statistical methods, achieving an F-score of 61.5% with balanced recall and precision. Moreover, the statistical nature of the purely data-driven HVS model makes it intrinsically robust, and it can easily be adapted to other domains.
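For reference, with balanced recall and precision the F-score collapses to their common value, which is why the single 61.5% figure summarises all three measures:

```latex
F_1 = \frac{2PR}{P+P}, \qquad
P = R = 0.615 \;\Rightarrow\; F_1 = \frac{2(0.615)^2}{2(0.615)} = 0.615
```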
Abstract:
Large-scale disasters occur constantly around the world, and in many cases regions of a city must be evacuated. Operational Research/Management Science (OR/MS) has been widely used in emergency planning for over five decades. Warning dissemination, evacuee transportation and shelter management are three 'Evacuation Support Functions' (ESFs) generic to many hazards. This thesis adopts a case study approach to illustrate the importance of an integrated approach to evacuation planning, and particularly the role of OR/MS models. In the warning dissemination phase, uncertainty in households' behaviour as 'warning informants' was investigated, along with uncertainties in the warning system. An agent-based model (ABM) was developed for ESF-1, with households as agents and 'warning informant' behaviour as the agent behaviour; the model was used to study warning dissemination effectiveness under various conditions of the official channel. In the transportation phase, uncertainties in household behaviour such as departure time (a function of ESF-1), means of transport and destination were investigated. Households could evacuate as pedestrians, by car or by evacuation bus, and an ABM was developed to study evacuation performance, measured as evacuation travel time. The thesis also develops a holistic approach to planning public evacuation shelters, called the 'Shelter Information Management System' (SIMS). A generic framework was developed to allocate available shelter capacity to shelter demand while considering evacuation travel time; this was formulated using integer programming. In the sheltering phase, the impact of uncertainty in household shelter choice (nearest, allocated or convenient) on allocation policies was studied using sensitivity analyses. From the model analyses and a detailed examination of household states from 'warning to safety', it was found that the three ESFs, though sequential in time, have many interdependencies from the perspective of evacuation planning. The thesis thus illustrates an OR/MS-based integrated approach that includes and goes beyond single-ESF preparedness, and the approach developed will help in understanding the inter-linkages of the three evacuation phases and in preparing multi-agency-based evacuation plans.
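The abstract describes the allocation model only as an integer program minimising evacuation travel time subject to shelter capacity; the sketch below shows one plausible such formulation using the PuLP library, with all zone, shelter and travel-time data invented for illustration.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum

# Hedged sketch of the kind of integer program SIMS could use (the thesis's
# exact formulation is not given in the abstract): assign evacuees to
# shelters, minimising total travel time subject to shelter capacities.
demand   = {"zoneA": 300, "zoneB": 500}                 # evacuees per zone
capacity = {"s1": 400, "s2": 600}                       # shelter capacities
time     = {("zoneA", "s1"): 10, ("zoneA", "s2"): 25,   # travel times (min)
            ("zoneB", "s1"): 20, ("zoneB", "s2"): 15}

prob = LpProblem("shelter_allocation", LpMinimize)
x = {(z, s): LpVariable(f"x_{z}_{s}", lowBound=0, cat="Integer")
     for z in demand for s in capacity}

prob += lpSum(time[z, s] * x[z, s] for z in demand for s in capacity)
for z in demand:                       # everyone in a zone must be sheltered
    prob += lpSum(x[z, s] for s in capacity) == demand[z]
for s in capacity:                     # no shelter over capacity
    prob += lpSum(x[z, s] for z in demand) <= capacity[s]

prob.solve()
print({k: v.value() for k, v in x.items()})
```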
Abstract:
Hospitals everywhere are integrating health data using electronic health record (EHR) systems, so that disparate multimedia patient data can be input by different caregivers at different locations as encapsulated patient profiles. Healthcare institutions are also using the flexibility and speed of wireless computing to improve quality and reduce costs. We are developing a mobile application that allows doctors to record and access complete, accurate, real-time patient information efficiently. The system integrates medical imagery with textual patient profiles, as well as expert interactions by healthcare personnel, using knowledge management and case-based reasoning techniques. The application can assist caregivers in searching large repositories of previous patient cases: a patient's symptoms can be input on a portable device and the application can quickly retrieve similar profiles, which can be used to support effective diagnoses and prognoses by comparing symptoms, treatments, diagnoses, test results and other patient information.
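The case-retrieval step of such a case-based reasoning engine can be sketched very simply; below, stored cases are ranked by Jaccard overlap of symptom sets with the current patient. The case base and the similarity choice are illustrative assumptions, not the application's actual design.

```python
# Minimal case-based-reasoning retrieval step (illustrative only): rank
# stored patient cases by symptom-set similarity to the current patient.
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

case_base = {
    "case-101": {"fever", "cough", "fatigue"},
    "case-102": {"headache", "nausea"},
    "case-103": {"fever", "rash"},
}

def retrieve_similar(symptoms: set, k: int = 2):
    ranked = sorted(case_base.items(),
                    key=lambda kv: jaccard(symptoms, kv[1]), reverse=True)
    return ranked[:k]

print(retrieve_similar({"fever", "cough"}))   # case-101 ranks first
```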
Abstract:
The Indian petroleum industry is passing through a very dynamic business environment due to liberalisation. Effective project management for developing new infrastructure and maintaining existing facilities is considered one means of remaining competitive, but current practices suffer from many shortcomings: overruns in time and cost and shortfalls in quality are part and parcel of almost every project. This study focuses on identifying the specific causes of project failure, first characterising projects in the Indian petroleum industry and then suggesting remedial measures for resolving these issues. The suggested project management model is integrated through an information management system and demonstrated through a case study.
Abstract:
This paper presents an adaptive method that uses a genetic algorithm (GA) to modify users' queries based on relevance judgments. The algorithm was applied to three well-known document collections (CISI, NLP and CACM). The method is shown to be applicable to large text collections, with more relevant documents being presented to users after the genetic modification. The experiments show the effect of applying a GA to improve the effectiveness of queries in IR systems. Further studies are planned to adjust the system parameters and improve its effectiveness. The goal is to retrieve the most relevant documents, with fewer non-relevant documents, with respect to a user's query in an information retrieval system using a genetic algorithm.
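The paper's operators and fitness function are not given in the abstract; the sketch below shows the general shape of such a GA, evolving query term-weight vectors under an invented fitness stand-in that a real system would replace with retrieval effectiveness over relevance-judged documents.

```python
import random

# Illustrative GA for relevance-feedback query modification. A query is a
# weight vector over vocabulary terms; the fitness below is an invented
# stand-in (a real system would score retrieval against judged documents).
VOCAB = ["information", "retrieval", "genetic", "algorithm", "query"]

def fitness(weights):                      # hypothetical: reward weight on
    return weights[0] + weights[1]         # the two truly relevant terms

def crossover(a, b):
    cut = random.randrange(1, len(a))      # single-point crossover
    return a[:cut] + b[cut:]

def mutate(w, rate=0.2):
    return [min(1.0, max(0.0, x + random.uniform(-0.3, 0.3)))
            if random.random() < rate else x for x in w]

pop = [[random.random() for _ in VOCAB] for _ in range(20)]
for _ in range(50):                        # evolve the query population
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                     # keep the fittest half
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(10)]

best = max(pop, key=fitness)
print(dict(zip(VOCAB, (round(x, 2) for x in best))))
```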
Abstract:
A methodology for formally modeling and analyzing the software architecture of mobile agent systems provides a solid basis for developing high-quality mobile agent systems, and such a methodology is also helpful for studying other distributed and concurrent systems. Providing it is a challenge, however, because of agent mobility in mobile agent systems.
The methodology was defined from the two essential parts of software architecture: a formalism to define architectural models, and an analysis method to formally verify system properties. The formalism is two-layer Predicate/Transition (PrT) nets extended with dynamic channels, and the analysis method is a hierarchical approach that verifies models at different levels. The two-layer modeling formalism smoothly transforms physical models of mobile agent systems into their architectural models. Dynamic channels facilitate synchronous communication between nets, and they naturally capture the dynamic architectural configuration and agent mobility of mobile agent systems. Component properties are verified on the transformed individual components, system properties are checked on a simplified system model, and interaction properties are analyzed on models composed from the nets involved. Based on the formalism and the analysis method, the researcher formally modeled and analyzed a software architecture for mobile agent systems, and designed an architectural model of a medical information processing system based on mobile agents. The model checking tool SPIN was used to verify system properties such as reachability, concurrency and safety of the medical information processing system.
From the successful modeling and analysis of the software architecture of mobile agent systems, the conclusion is that PrT nets extended with channels are a powerful tool for modeling mobile agent systems, and that the hierarchical analysis method provides a rigorous foundation for the modeling tool. The hierarchical analysis method not only reduces the complexity of the analysis but also expands the application scope of model checking techniques. The results of formally modeling and analyzing the software architecture of the medical information processing system show that model checking is an effective and efficient way to verify software architecture. Moreover, the system demonstrates the high flexibility, efficiency and low cost of mobile agent technologies.
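The thesis's two-layer PrT nets are far richer than this, but the essence of net-based verification can be illustrated with a minimal place/transition net and an exhaustive reachability search, which is the kind of exploration a model checker such as SPIN automates at scale. The net below (an agent migrating to a host and processing a record there) is invented for illustration.

```python
# Invented toy net (far simpler than two-layer PrT nets): an agent migrates
# to a host and processes a record there. Exhaustive reachability search of
# this kind is what a model checker such as SPIN automates at scale.
NET = {  # transition: (input places, output places)
    "migrate": ({"agent_at_host1"}, {"agent_at_host2"}),
    "process": ({"agent_at_host2", "record"}, {"result"}),
}

def enabled(marking, t):
    return NET[t][0] <= marking          # all input places marked?

def fire(marking, t):
    pre, post = NET[t]
    return (marking - pre) | post        # consume inputs, produce outputs

def reachable(initial):
    seen, stack = set(), [frozenset(initial)]
    while stack:                          # depth-first marking exploration
        m = stack.pop()
        if m in seen:
            continue
        seen.add(m)
        stack.extend(frozenset(fire(set(m), t))
                     for t in NET if enabled(set(m), t))
    return seen

for m in sorted(reachable({"agent_at_host1", "record"}), key=sorted):
    print(sorted(m))                      # every reachable marking
```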