11 results for information systems evaluation
at Universidad de Alicante
Abstract:
In light of the growing interest raised by Information Systems offshore outsourcing, both in the managerial world and in the academic arena, the present work carries out a review of the research in this area. We have analysed 89 research articles on this topic published in 17 prestigious journals. The analysis deals with aspects such as research methodologies, level of analysis in the studies, data perspective, economic theories used, and the location of vendors and clients of these services; it additionally identifies the most frequent topics in this field as well as the most prolific authors and countries. Although other reviews of the research in this area have been published, the present paper achieves a greater level of detail than previous works. This review of the literature could have interesting implications not only for academics but also for business practice.
Abstract:
Technology innovation (TI) in the health system has been confirmed as the best way to share information and to facilitate communication among all the actors involved in the care of chronic patients.
Abstract:
This introduction provides an overview of the state of the art in Applications of Natural Language to Information Systems. Specifically, we analyze the need for such technologies to successfully address the new challenges of modern information systems, in which the exploitation of the Web as a main data source for business systems becomes a key requirement. We also discuss the reasons why Human Language Technologies themselves have shifted their focus onto new areas of interest directly linked to the development of technology for processing and understanding Web 2.0 content. These new technologies are expected to become the interfaces for the new information systems to come. Moreover, we review current topics of interest to this research community and present the selection of manuscripts chosen by the program committee of the NLDB 2011 conference as representative cornerstone research works, highlighting in particular their contribution to the advancement of such technologies.
Abstract:
Purpose – The purpose of this paper is to analyse Information Systems outsourcing success, measuring the latter according to the satisfaction level achieved by users and taking into account three success factors: the role played by the client firm's top management; the relationships between client and provider; and the degree of outsourcing. Design/methodology/approach – A survey was carried out by means of a questionnaire answered by 398 large Spanish firms. Its results were examined using partial least squares software and a proposed structural equation model. Findings – The conclusions reveal that the perceived benefits play a mediating role in outsourcing satisfaction and that these benefits can be grouped into three categories: strategic, economic and technological. Originality/value – The study identifies how some success factors are more influential than others depending on which type of benefit is ultimately sought with outsourcing.
Abstract:
Despite the proliferation of academic research on information systems outsourcing, few studies analyze the characteristics of outsourcing contracts. This research aims to provide an in-depth description of the characteristics of information systems outsourcing contracts. An additional objective is to examine how these characteristics evolve over time. Finally, this study reports on the usefulness of measuring such characteristics over time to assess the maturity level of information systems outsourcing. The study gathers data through a questionnaire answered by the information systems managers of the largest Spanish firms. This longitudinal study covers 12 years of research and compares the authors' previous research results with the results of the present study.
Abstract:
Geographic Information Systems allow us to study the evolution over time of any phenomenon or physical fact that can be geographically referenced. In the present work, a Geographic Information System is used to study the industrial development of the city of Alcoy under the 1957 General Urban Development Plan (P. G. O. U.). Over the lifetime of this plan, which spans a period of 32 years with a single revision in 1982, the city underwent major economic, social, industrial and urban transformations. The work aims, on the one hand, to map the evolution of the location of Alcoy's industry and to carry out an analysis that makes explicit the industrial policy pursued by the public administrations and the consequences it has had for the city's development. Secondly, it aims to study the possibilities of a GIS application such as GeoMedia for carrying out this study, as well as to analyse the process involved in the work: digitisation of maps, georeferencing, use of digital maps, definition of entities and entity classes, databases to be used, queries to be performed, etc.
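A minimal sketch of the kind of spatio-temporal query such a study involves, assuming a hypothetical digitised layer of industrial sites with year and district attributes; the file name, column names and the use of the geopandas library are illustrative choices, not elements of the original study:

```python
import geopandas as gpd

# Hypothetical digitised layer of industrial sites in Alcoy; each feature is
# assumed to carry the year it first appears in the cartography.
industry = gpd.read_file("alcoy_industry.shp")

# Restrict the layer to the period of the 1957 plan up to its 1982 revision.
pre_revision = industry[(industry["year"] >= 1957) & (industry["year"] <= 1982)]

# Aggregate industrial surface by district to expose shifts in location.
by_district = pre_revision.dissolve(by="district", aggfunc="sum")
print(by_district[["area_m2"]])

# Plot the selection, coloured by year, to produce a simple evolution map.
pre_revision.plot(column="year", legend=True)
```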
Abstract:
In the last few years, research on textual information systems has developed considerably. The goal is to improve these systems in order to allow easy location, processing and access to the information stored in digital format (digital databases, document databases, and so on). There are many applications focused on information access (for example, Web search systems like Google or Altavista). However, these applications run into problems when they must access cross-language information, or when they need to show information in a language different from that of the query. This paper explores the use of syntactic-semantic patterns as a method to access multilingual information, and reviews, in the case of Information Retrieval, where it is possible and useful to employ patterns for the multilingual and interactive aspects. On the one hand, the multilingual aspects studied are those related to accessing documents in languages different from that of the query, as well as the automatic translation of the document, i.e. a machine translation system based on patterns. On the other hand, the paper goes deeper into the interactive aspects related to the reformulation of a query based on the syntactic-semantic pattern of the request.
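A toy sketch of pattern-based query reformulation, assuming a single hypothetical <AGENT> <ACTION> <OBJECT> pattern and a tiny English-Spanish lexicon; none of these resources are taken from the paper:

```python
import re

# Hypothetical syntactic-semantic pattern: <AGENT> <ACTION> <OBJECT>.
PATTERN = re.compile(r"(?P<agent>\w+) (?P<action>\w+) (?P<object>\w+)")

# Toy English -> Spanish lexicon, purely illustrative.
LEXICON_EN_ES = {
    "company": "empresa",
    "acquires": "adquiere",
    "software": "software",
}

def reformulate(query: str) -> str:
    """Map an English query onto the pattern and regenerate it in Spanish."""
    match = PATTERN.match(query.lower())
    if match is None:
        return query                      # fall back to the original query
    slots = match.groupdict()
    translated = [LEXICON_EN_ES.get(slots[s], slots[s])
                  for s in ("agent", "action", "object")]
    return " ".join(translated)

print(reformulate("Company acquires software"))   # -> "empresa adquiere software"
```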
Abstract:
In this paper we explore the use of semantic classes in an existing information retrieval system in order to improve its results. We use two different ontologies of semantic classes (WordNet Domains and Basic Level Concepts) to re-rank the retrieved documents and obtain better recall and precision. Finally, we implement a new method for weighting the expanded terms, taking into account the weights of the original query terms and their WordNet relations with respect to the new terms, which has been shown to improve the results. The evaluation of these approaches was carried out in the CLEF Robust-WSD Task, obtaining an improvement of 1.8% in GMAP for the semantic classes approach and of 10% in MAP for the WordNet term weighting approach.
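A minimal sketch of how expansion terms could be weighted from the original query-term weights and a WordNet similarity, using NLTK's WordNet interface; the scoring formula is an illustrative assumption, not the exact method of the paper:

```python
from nltk.corpus import wordnet as wn   # requires the NLTK WordNet corpus

def expanded_term_weight(new_term, original_terms, original_weights):
    """Weight an expansion term from the original query-term weights,
    scaled by a WordNet path similarity (illustrative scheme only)."""
    new_synsets = wn.synsets(new_term)
    if not new_synsets:
        return 0.0
    weight = 0.0
    for term, w in zip(original_terms, original_weights):
        sims = [s1.path_similarity(s2) or 0.0
                for s1 in wn.synsets(term)
                for s2 in new_synsets]
        if sims:
            weight += w * max(sims)      # closest sense pair dominates
    return weight

# Toy usage: weight "vehicle" as an expansion of the query "car engine".
print(expanded_term_weight("vehicle", ["car", "engine"], [0.7, 0.3]))
```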
Abstract:
The exponential increase of subjective, user-generated content since the birth of the Social Web has made it necessary to develop automatic text processing systems able to extract, process and present relevant knowledge. In this paper, we tackle the Opinion Retrieval, Mining and Summarization task by proposing a unified framework composed of three crucial components (information retrieval, opinion mining and text summarization) that allow the retrieval, classification and summarization of subjective information. An extensive analysis is conducted in which different configurations of the framework are suggested and analyzed in order to determine which one performs best and under which conditions. The evaluation carried out and the results obtained show the appropriateness of the individual components, as well as of the framework as a whole. By achieving an improvement of over 10% compared to state-of-the-art approaches in the context of blogs, we can conclude that subjective text can be efficiently dealt with by means of the proposed framework.
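A schematic sketch of the three-stage framework (retrieval, opinion mining, summarization) with deliberately naive placeholder components; it only shows how the pieces fit together and is not the actual system described in the paper:

```python
import re

def tokens(text):
    """Lowercase word tokens of a text."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query, documents):
    """Retrieval placeholder: keep documents sharing at least one query term."""
    q = tokens(query)
    return [d for d in documents if q & tokens(d)]

POSITIVE = {"good", "great"}
NEGATIVE = {"bad", "poor"}

def classify_opinion(sentence):
    """Opinion-mining placeholder: naive lexicon-based polarity classifier."""
    score = len(tokens(sentence) & POSITIVE) - len(tokens(sentence) & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

def summarize(sentences, max_sentences=2):
    """Summarization placeholder: keep the longest sentences as a crude proxy."""
    return sorted(sentences, key=len, reverse=True)[:max_sentences]

docs = ["The camera is good and the battery life is great.",
        "Poor screen and bad customer support.",
        "Unrelated text about cooking pasta."]
relevant = retrieve("camera battery screen", docs)
labelled = [(d, classify_opinion(d)) for d in relevant]
print(labelled)
print(summarize([d for d, label in labelled if label == "positive"]))
```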
Abstract:
This article analyzes the appropriateness of a text summarization system, COMPENDIUM, for generating abstracts of biomedical papers. Two approaches are suggested: an extractive one (COMPENDIUM E), which only selects and extracts the most relevant sentences of the documents, and an abstractive-oriented one (COMPENDIUM E–A), which also faces the challenge of abstractive summarization. This novel strategy combines extractive information with pieces of information from the article that have previously been compressed or fused. Specifically, in this article we want to study: i) whether COMPENDIUM produces good summaries in the biomedical domain; ii) which summarization approach is more suitable; and iii) the opinion of real users towards automatic summaries. Therefore, two types of evaluation were performed, quantitative and qualitative, assessing both the information contained in the summaries and user satisfaction. Results show that extractive and abstractive-oriented summaries perform similarly with respect to the information they contain, so both approaches are able to keep the relevant information of the source documents, but the latter is more appropriate from a human perspective when user satisfaction is assessed. This also confirms the suitability of the suggested approach for generating summaries following an abstractive-oriented paradigm.
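An illustrative sketch of an extractive step: scoring sentences by the frequency of their content words and keeping the top-ranked ones in document order. This is a generic frequency heuristic standing in for the extractive stage, not COMPENDIUM's actual algorithm:

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "of", "and", "in", "to", "is", "it", "also"}

def extractive_summary(text, max_sentences=2):
    """Select the most relevant sentences by summed content-word frequency."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = [w for w in re.findall(r"\w+", text.lower()) if w not in STOPWORDS]
    freq = Counter(words)

    def score(sentence):
        return sum(freq[w] for w in re.findall(r"\w+", sentence.lower())
                   if w not in STOPWORDS)

    top = set(sorted(sentences, key=score, reverse=True)[:max_sentences])
    # Preserve the original document order in the summary.
    return " ".join(s for s in sentences if s in top)

print(extractive_summary("Aspirin reduces fever. It also thins the blood. "
                         "Blood thinning lowers stroke risk. "
                         "The trial enrolled 200 patients."))
```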
Abstract:
The development of applications and services for mobile systems faces a wide range of devices with very heterogeneous capabilities whose response times are difficult to predict. The research described in this work aims to respond to this issue by developing a computational model that formalizes the problem and defines methods for adjusting the computation. The proposal combines imprecise computation strategies with cloud computing paradigms in order to provide flexible implementation frameworks for embedded or mobile devices. As a result, an imprecise computation scheduling method applied to the workload of the embedded system decides when to move computation to the cloud, according to the priority and response time of the tasks to be executed, so that the desired productivity and quality of service can be met. A technique to estimate network delays and to schedule tasks more accurately is illustrated in this paper. An application example is described in which this technique is tested in running contexts with heterogeneous workloads in order to check the validity of the proposed model.
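A minimal sketch of the offloading decision described above, assuming hypothetical task attributes (priority, local and cloud execution times, deadline) and a simple moving-average delay estimator; the names and the decision rule are illustrative assumptions, not the paper's model:

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    priority: int        # higher value = more important
    local_time: float    # estimated execution time on the device (s)
    cloud_time: float    # estimated execution time in the cloud (s)
    deadline: float      # maximum acceptable response time (s)

def estimate_network_delay(samples):
    """Estimate the round-trip delay as a moving average of recent samples."""
    return sum(samples) / len(samples)

def schedule(tasks, delay_samples):
    """Decide, per task, whether to offload to the cloud or run locally."""
    delay = estimate_network_delay(delay_samples)
    plan = []
    for task in sorted(tasks, key=lambda t: t.priority, reverse=True):
        remote = task.cloud_time + delay
        if remote <= task.deadline and remote < task.local_time:
            plan.append((task.name, "cloud"))
        elif task.local_time <= task.deadline:
            plan.append((task.name, "device"))
        else:
            # Imprecise computation fallback: run only the mandatory part.
            plan.append((task.name, "device, mandatory part only"))
    return plan

tasks = [Task("render", 2, 1.2, 0.3, 0.8), Task("sync", 1, 0.1, 0.4, 0.5)]
print(schedule(tasks, delay_samples=[0.12, 0.15, 0.10]))
```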