14 results for Process-aware information systems, Work list visualisation, YAWL
at Universidad de Alicante
Abstract:
In the light of the growing interest raised by Information Systems Offshore Outsourcing in both the managerial world and the academic arena, the present work reviews the research in this area. We have analysed 89 research articles on this topic published in 17 prestigious journals. The analysis deals with aspects such as research methodologies, the level of analysis in the studies, the data perspective, the economic theories used, and the location of the vendors and clients of these services; it additionally identifies the most frequent topics in this field as well as the most prolific authors and countries. Although other reviews of the research in this area have been published, the present paper achieves a greater level of detail than previous works. This review of the literature could have interesting implications not only for academics but also for business practice.
Abstract:
Technology innovation (TI) in the health system has been confirmed as the best way to share information and to facilitate communication among all the actors involved in chronic patients' care.
Abstract:
This introduction provides an overview of state-of-the-art technology in Applications of Natural Language to Information Systems. Specifically, we analyze the need for such technologies to successfully address the new challenges of modern information systems, in which the exploitation of the Web as a main data source for business systems becomes a key requirement. We also discuss the reasons why Human Language Technologies themselves have shifted their focus onto new areas of interest directly linked to the development of technology for the treatment and understanding of Web 2.0; these new technologies are expected to be the future interfaces of the information systems to come. Moreover, we review the current topics of interest to this research community and present the selection of manuscripts chosen by the program committee of the NLDB 2011 conference as representative cornerstone research works, especially highlighting their contribution to the advancement of such technologies.
Abstract:
Purpose – The purpose of this paper is to analyse Information Systems outsourcing success, measuring the latter according to the satisfaction level achieved by users and taking into account three success factors: the role played by the client firm's top management; the relationships between client and provider; and the degree of outsourcing. Design/methodology/approach – A survey was carried out by means of a questionnaire answered by 398 large Spanish firms. Its results were examined using partial least squares software and a proposed structural equation model. Findings – The conclusions reveal that the perceived benefits play a mediating role in outsourcing satisfaction and that these benefits can be grouped into three categories: strategic, economic and technological. Originality/value – The study identifies how some success factors are more influential than others depending on which type of benefits is ultimately sought through outsourcing.
Abstract:
Despite the proliferation of academic research on information systems outsourcing, few studies analyze the characteristics of outsourcing contracts. This research aims to provide an in-depth description of information systems outsourcing contracts. An additional objective is to examine how these characteristics evolve over time. Finally, this study reports on the usefulness of measuring such characteristics over time to assess the maturity level of information systems outsourcing. The data were gathered from the responses of the information systems managers of the largest Spanish firms to a questionnaire. This longitudinal study covers 12 years of research and compares the authors' previous results with those of the present study.
Abstract:
Geographic knowledge discovery (GKD) is the process of extracting information and knowledge from massive georeferenced databases. The process is usually accomplished by two different systems: Geographic Information Systems (GIS) and data mining engines. However, developing such systems is a complex task because it does not follow a systematic, integrated and standard methodology. To overcome these pitfalls, in this paper we propose a modeling framework that addresses the development of the different parts of a multilayer GKD process. The main advantages of our framework are that: (i) it reduces the design effort, (ii) it improves the quality of the systems obtained, (iii) it is independent of platforms, (iv) it facilitates the use of data mining techniques on georeferenced data, and finally, (v) it improves communication between different users.
Abstract:
Geographic Information Systems allow us to study the evolution over time of any phenomenon or physical fact that can be georeferenced. This work uses a Geographic Information System to study the industrial development of the city of Alcoy under the P. G. O. U. (General Urban Development Plan) of 1957. During the lifetime of this plan, which spanned a period of 32 years with a single revision in 1982, the city underwent major economic, social, industrial and urban transformations. The work aims, on the one hand, to map the evolution of the location of Alcoy's industry and to carry out an analysis that reveals the industrial policy pursued by the public administrations and its consequences for the city's development. Secondly, it aims to assess the suitability of a GIS application such as GeoMedia for carrying out this study, and to analyse the process followed in the work: map digitisation, georeferencing, use of digital maps, definition of entities and entity classes, databases to be used, queries to be run, etc.
Abstract:
The exponential increase in subjective, user-generated content since the birth of the Social Web has made it necessary to develop automatic text processing systems able to extract, process and present relevant knowledge. In this paper, we tackle the Opinion Retrieval, Mining and Summarization task by proposing a unified framework composed of three crucial components (information retrieval, opinion mining and text summarization) that allows the retrieval, classification and summarization of subjective information. An extensive analysis is conducted in which different configurations of the framework are suggested and analyzed in order to determine which is the best one, and under which conditions. The evaluation carried out and the results obtained show the appropriateness of the individual components, as well as of the framework as a whole. By achieving an improvement of over 10% compared to state-of-the-art approaches in the context of blogs, we can conclude that subjective text can be dealt with efficiently by means of our proposed framework.
Abstract:
Paper presented at the 2nd International Workshop on Pattern Recognition in Information Systems, Alicante, April 2002.
Abstract:
In this paper we address two issues. The first is whether the performance of a text summarization method depends on the topic of a document. The second concerns how certain linguistic properties of a text may affect the performance of a number of automatic text summarization methods. To this end, we consider semantic analysis methods, such as textual entailment and anaphora resolution, and study how they relate to the proper noun, pronoun and noun ratios calculated over the original documents, which are grouped into related topics. Given the results obtained, we conclude that although our first hypothesis is not supported, since no evident relationship was found between the topic of a document and the performance of the methods employed, adapting summarization systems to the linguistic properties of the input documents benefits the summarization process.
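The proper noun, pronoun and noun ratios mentioned in this abstract can be computed straightforwardly over a POS-tagged document. The sketch below is illustrative only: it assumes Penn Treebank tags and a toy input, not the paper's actual corpus or tooling.

```python
def pos_ratios(tagged_tokens):
    """Proper-noun, pronoun and common-noun ratios over a POS-tagged
    document (Penn Treebank tag set assumed)."""
    n = len(tagged_tokens)
    counts = {"proper": 0, "pronoun": 0, "noun": 0}
    for _, tag in tagged_tokens:
        if tag in ("NNP", "NNPS"):        # proper nouns
            counts["proper"] += 1
        elif tag in ("PRP", "PRP$"):      # personal/possessive pronouns
            counts["pronoun"] += 1
        elif tag in ("NN", "NNS"):        # common nouns
            counts["noun"] += 1
    return {k: v / n for k, v in counts.items()}

# Toy tagged sentence: "Alice reads her reports"
doc = [("Alice", "NNP"), ("reads", "VBZ"), ("her", "PRP$"), ("reports", "NNS")]
ratios = pos_ratios(doc)  # each category accounts for 1 of 4 tokens
```

Ratios like these give a cheap, language-tool-independent signal for grouping documents before choosing a summarization method.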
Abstract:
Business Intelligence (BI) applications have gradually been ported to the Web in search of a global platform for the consumption and publication of data and services. On the Internet, apart from techniques for data/knowledge management, BI Web applications need highly interoperable interfaces (similar to traditional desktop interfaces) for the visualisation of data/knowledge. In some cases, this has been provided by Rich Internet Applications (RIA). The development of these BI RIAs has traditionally been performed manually and, given the complexity of the final application, it is a process prone to errors. The application of model-driven engineering techniques can reduce the cost of developing and maintaining these applications (in terms of time and resources), as has been demonstrated for other types of Web applications. In the light of these issues, this paper introduces the Sm4RIA-B methodology, i.e., a model-driven methodology for the development of RIAs as BI Web applications. In order to overcome the limitations of RIAs regarding knowledge management from the Web, this paper also presents a new RIA platform for BI, called RI@BI, which extends the functionalities of traditional RIAs by means of Semantic Web technologies and B2B techniques. Finally, we evaluate the whole approach on a case study: the development of a social network site for an enterprise project manager.
Abstract:
Different types of land use are usually present in the areas adjacent to many shallow karst cavities. Over time, the increasing amount of potentially harmful matter and energy, of mainly anthropic origin or influence, that reaches the interior of a shallow karst cavity can modify the hypogeal ecosystem and increase the risk of damage to the Palaeolithic rock art often preserved within the cavity. This study proposes a new Protected Area status based on the geological processes that control these matter and energy fluxes into the Altamira cave karst system. Analysis of the geological characteristics of the shallow karst system shows that direct and lateral infiltration, internal water circulation, ventilation, gas exchange and transmission of vibrations are the processes that control these matter and energy fluxes into the cave. This study applies a comprehensive methodological approach based on Geographic Information Systems (GIS) to establish the area of influence of each transfer process. The stratigraphic and structural characteristics of the interior of the cave were determined using 3D Laser Scanning topography combined with classical field work, data gathering, cartography and a porosity–permeability analysis of host rock samples. As a result, it was possible to determine the hydrogeological behavior of the cave. In addition, by mapping and modeling the surface parameters it was possible to identify the main features restricting hydrological behavior and hence direct and lateral infiltration into the cave. These surface parameters included the shape of the drainage network and a geomorphological and structural characterization via digital terrain models. Geological and geomorphological maps and models integrated into the GIS environment defined the areas involved in gas exchange and ventilation processes. Likewise, areas that could potentially transmit vibrations directly into the cave were identified. 
This study shows that it is possible to define a Protected Area by quantifying the area of influence related to each transfer process. The combined maximum area of influence of all the processes will result in the new Protected Area. This area will thus encompass all the processes that account for most of the matter and energy carried into the cave and will fulfill the criteria used to define the Protected Area. This methodology is based on the spatial quantification of processes and entities of geological origin and can therefore be applied to any shallow karst system that requires protection.
Abstract:
The development of applications and services for mobile systems must cope with a wide range of devices with very heterogeneous capabilities whose response times are difficult to predict. The research described in this work addresses this issue by developing a computational model that formalizes the problem and defines adaptive computing methods. The proposal combines imprecise computation strategies with cloud computing paradigms in order to provide flexible implementation frameworks for embedded or mobile devices. As a result, scheduling the imprecise computation of the embedded system's workload makes it possible to move computation to the cloud according to the priority and response time of the tasks to be executed, and thereby to meet the desired productivity and quality of service. A technique to estimate network delays and to schedule tasks more accurately is illustrated in this paper, together with an application example in which the technique is tested in running contexts with heterogeneous workloads in order to check the validity of the proposed model.
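The offloading decision this abstract describes can be reduced, in its simplest form, to a rule: run a task in the cloud only when remote execution plus the estimated network delay still meets the task's deadline and improves on local execution. The function and parameter names below are illustrative assumptions, not the paper's actual model.

```python
def should_offload(local_time, cloud_time, est_network_delay, deadline):
    """Decide whether to move a task from the device to the cloud.

    Offload only when remote execution (cloud time plus the estimated
    network round-trip delay) still meets the deadline and beats
    running the task locally; otherwise keep it on the device."""
    remote_total = cloud_time + est_network_delay
    if remote_total > deadline:
        return False  # the cloud cannot meet the deadline either
    if local_time <= deadline and local_time <= remote_total:
        return False  # local execution is already good enough
    return True       # local misses the deadline or is slower

# A slow local task with a fast cloud and a short network delay is offloaded.
offload_slow = should_offload(local_time=10.0, cloud_time=2.0,
                              est_network_delay=1.0, deadline=5.0)
```

In an imprecise-computation setting, the same comparison would be made per refinement stage, so that only the optional (quality-improving) part of a task is moved or dropped when deadlines are tight.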
Abstract:
The use of 3D data in mobile robotics applications provides valuable information about the robot's environment. However, the huge amount of 3D information is usually difficult to manage because the robot's storage and computing capabilities are insufficient. A data compression method is therefore necessary to store and process this information while preserving as much of it as possible. A few methods have been proposed to compress 3D information; nevertheless, no consistent public benchmark exists for comparing the results (compression level, reconstruction distance error, etc.) obtained with different methods. In this paper, we propose a dataset composed of a set of 3D point clouds with varying structure and texture variability to evaluate the results obtained from 3D data compression methods. We also provide useful tools for comparing compression methods, using as a baseline the results obtained by existing relevant compression methods.
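One common way to quantify the reconstruction distance error mentioned in this abstract is the mean distance from each original point to its nearest neighbour in the reconstructed cloud. The sketch below is a brute-force illustration on toy clouds, not the benchmark's actual tooling.

```python
import math

def nn_error(original, reconstructed):
    """Mean distance from each original point to its nearest point in
    the reconstructed cloud (brute force; fine for small examples)."""
    return sum(
        min(math.dist(p, q) for q in reconstructed) for p in original
    ) / len(original)

cloud = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
lossless = nn_error(cloud, cloud)            # identical clouds: error is 0.0
lossy = nn_error(cloud, [(0.0, 0.0, 0.0)])   # heavy decimation: error grows
```

Real benchmarks typically use spatial indices (k-d trees) instead of the brute-force inner loop, and report this error alongside the achieved compression ratio.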