775 results for process-aware information systems


Relevance: 100.00%

Abstract:

Thesis submitted in partial fulfillment of the requirements for the degree of Doctor in Information Management.

Abstract:

This article proposes a methodology to address the urban evolutionary process, demonstrating how it is reflected in literature. It focuses on “literary space,” presented as a territory defined by the period setting or as evoked by the characters, which can be georeferenced and drawn on a map. It identifies the different locations of literary space in relation to urban development and the economic, political, and social context of the city. We suggest a new approach for mapping a relatively comprehensive body of literature by combining literary criticism, urban history, and geographic information systems (GIS). The home-range concept, used in animal ecology, has been adapted to reveal the size and location of literary space. This interdisciplinary methodology is applied in a case study to nineteenth- and twentieth-century novels involving the city of Lisbon. The concepts of cumulative literary space and common literary space developed here add size calculations to the location and structure analyses previously carried out by other researchers. Sequential and overlapping analyses of literary space through time have the advantage of producing results that are comparable and repeatable by other researchers using a different body of literary works or studying another city. Results show how city changes shaped perceptions of the urban space as it was lived and experienced. A small core area, corresponding to part of the city center, persists as literary space in all the novels analyzed. Furthermore, literary space does not match urban evolution: there is a time lag before newly urbanized areas are embedded in the imagined literary scenario.
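The home-range idea borrowed from animal ecology can be made concrete as a minimum convex polygon around the georeferenced places of a novel, whose area gives the "size" of its literary space. The sketch below, with hypothetical projected coordinates rather than real Lisbon data, computes the hull and its area in pure Python.

```python
# Sketch of the home-range idea adapted from animal ecology: given
# georeferenced points of a novel's "literary space", compute the
# minimum convex polygon (MCP) enclosing them and its area.
# Coordinates are hypothetical; a real study would use projected GIS data.

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def polygon_area(hull):
    """Shoelace formula: area of the minimum convex polygon."""
    n = len(hull)
    s = sum(hull[i][0]*hull[(i+1) % n][1] - hull[(i+1) % n][0]*hull[i][1]
            for i in range(n))
    return abs(s) / 2.0

# Places evoked in a (hypothetical) novel, in projected metres.
places = [(0, 0), (400, 0), (400, 300), (0, 300), (200, 150)]
hull = convex_hull(places)
area = polygon_area(hull)   # 400 m x 300 m rectangle -> 120000 m^2
```

Cumulative and common literary space would then be the union and intersection, respectively, of such polygons across several novels.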

Abstract:

Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.

Abstract:

Software development is a discipline almost as old as the history of computers, and with the advent of the Internet and its related technologies it has been in high demand. Especially in small and medium enterprises (SMEs), however, this demand was not accompanied by a comparable effort to develop a set of sustainable and standardized project management activities, which has led to increasing inefficiencies and costs. Given the current economic situation, it makes sense to engage in an effort to reduce these inefficiencies and rising costs. To that end, this work analyzes the current state of software development project management processes at a Portuguese SME, along with their problems and inefficiencies, in an effort to create a standardized model for managing software development, with special attention given to critical success factors in an agile software development environment, while using best practices in process modeling. This work also aims to create guidelines for correctly integrating these changes into the company's existing IS structure.

Abstract:

Until recently, hardly anyone could have predicted this course of GIS development: GIS is moving from the desktop to the cloud. Web 2.0 enabled people to input data into the web, and these data are increasingly geolocated. The resulting large volumes form what is called "Big Data", which scientists still do not know how to deal with completely. Different data mining tools are used to try to extract useful information from this Big Data. In our study, we also deal with one part of these data: User Generated Geographic Content (UGGC). The Panoramio initiative allows people to upload photos and describe them with tags. These photos are geolocated, meaning they have an exact location on the Earth's surface according to a certain spatial reference system. Using data mining tools, we try to answer whether it is possible to extract land use information from Panoramio photo tags, and to what extent this information can be accurate. Finally, we compared different data mining methods to determine which performs best on this kind of data, which is text. Our answers are quite encouraging: with more than 70% accuracy, we showed that extracting land use information is possible to some extent. We also found the Memory-Based Reasoning (MBR) method to be the most suitable for this kind of data in all cases.
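Memory-Based Reasoning is essentially instance-based learning (k-nearest neighbours): a new photo's tags are compared against every labelled photo held in memory and the majority label of the closest matches wins. A minimal sketch with hypothetical Panoramio-style tags and land-use labels:

```python
# MBR sketch: classify a photo's land use from its tags by comparing
# against labelled photos kept in "memory" (k-nearest neighbours with
# Jaccard similarity on tag sets). Tags and labels are hypothetical.

def jaccard(a, b):
    """Similarity between two tag sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def classify(tags, memory, k=3):
    """Majority land-use label among the k most similar stored photos."""
    ranked = sorted(memory, key=lambda m: jaccard(tags, m[0]), reverse=True)
    votes = {}
    for tag_set, label in ranked[:k]:
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)

memory = [
    ({"beach", "sand", "sea"}, "coastal"),
    ({"sea", "boat", "harbour"}, "coastal"),
    ({"office", "tower", "street"}, "urban"),
    ({"street", "shop", "traffic"}, "urban"),
    ({"forest", "trail", "trees"}, "natural"),
]

label = classify({"sea", "sand", "sunset"}, memory)  # -> "coastal"
```

A real experiment would of course use thousands of tagged photos and compare this against the other data mining methods mentioned in the study.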

Abstract:

The reduction of greenhouse gas emissions is one of the big global challenges for the coming decades, due to its severe impact on the atmosphere, which leads to changes in the climate and other environmental factors. One of the main sources of greenhouse gases is energy consumption, so a number of initiatives and calls for awareness and sustainability in energy use have been issued among different types of institutions and organizations. The European Council adopted energy and climate change objectives in 2007, targeting a 20% improvement by 2020, and all European countries are required to use energy more efficiently. Several steps can be taken toward energy reduction: understanding buildings' behavior through time, revealing the factors that influence consumption, applying the right measures for reduction and sustainability, visualizing the hidden connections between our daily habits and their impact on the natural world, and promoting a more sustainable life. Researchers have suggested that feedback visualization can effectively encourage conservation, with an energy reduction rate of 18%. Researchers have also contributed to identifying a set of factors that are very likely to influence consumption, such as occupancy level, occupant behavior, environmental conditions, building thermal envelope, and climate zone. Nowadays, the amount of energy consumed on university campuses is huge, and great effort is needed to meet the reduction requested by the European Council as well as to reduce costs. Thus, the present study was performed on university buildings as a use case to: a. investigate the most dynamic factors influencing energy consumption on campus; b. implement a prediction model for electricity consumption using different techniques, such as traditional regression and alternative machine learning techniques; and c. assist energy management by providing real-time energy feedback and visualization on campus for more awareness and better decision making. This methodology is applied to the use case of University Jaume I (UJI), located in Castellon, Spain.
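The model-comparison step can be illustrated with a toy baseline-versus-regression experiment: a naive flat average predictor against ordinary least squares on one influence factor (occupancy). The data below are hypothetical, not UJI campus measurements.

```python
# Toy sketch of comparing prediction techniques: a naive average
# predictor versus ordinary least-squares regression on occupancy.
# All numbers are hypothetical, not real campus data.

def fit_ols(xs, ys):
    """Least-squares slope and intercept for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def mae(preds, ys):
    """Mean absolute error of a list of predictions."""
    return sum(abs(p - y) for p, y in zip(preds, ys)) / len(ys)

occupancy = [10, 20, 30, 40, 50]          # people in the building
kwh       = [120, 180, 240, 300, 360]     # daily electricity use

a, b = fit_ols(occupancy, kwh)            # perfectly linear toy data
ols_err  = mae([a * x + b for x in occupancy], kwh)
mean_err = mae([sum(kwh) / len(kwh)] * len(kwh), kwh)
# The regression beats the flat average baseline on these data.
```

On real consumption data the same comparison would extend to the machine learning techniques the study mentions, evaluated on held-out data rather than the training set.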

Abstract:

Nowadays, the consumption of goods and services over the Internet is constantly increasing. Small and Medium Enterprises (SMEs), mostly from traditional industry sectors, usually do business in weak and fragile market sectors, where customized products and services prevail. To survive and compete in today's markets, they have to readjust their business strategies by creating new manufacturing processes and establishing new business networks through new technological approaches. In order to compete with big enterprises, these partnerships aim at sharing resources, knowledge, and strategies to boost the sector's business consolidation through the creation of dynamic manufacturing networks. To meet this demand, the development of a centralized information system is proposed, allowing enterprises to select and create dynamic manufacturing networks capable of monitoring the entire manufacturing process, including the assembly, packaging, and distribution phases. Even networking partners from the same area have multiple, heterogeneous representations of the same knowledge, denoting their own view of the domain. Thus, conceptually, semantically, and consequently lexically diverse knowledge representations may occur in the network, causing non-transparent sharing of information and interoperability inconsistencies. A framework, supported by a tool, that flexibly enables the identification, classification, and resolution of such semantic heterogeneities is therefore required. This tool will support the network in establishing semantic mappings, to facilitate the integration of the various enterprises' information systems.
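The semantic-mapping idea can be sketched as a mediation table that resolves each partner's local vocabulary onto a shared reference concept, flagging unresolved terms as heterogeneities to classify. The concepts and terms below are hypothetical illustrations, not the framework's actual ontology.

```python
# Sketch of semantic mapping in a manufacturing network: partners name
# the same concept differently; a mediation table maps local terms onto
# shared reference concepts so information can be exchanged transparently.
# Concepts and synonyms are hypothetical.

REFERENCE = {
    "delivery_date": {"delivery date", "shipping date", "dispatch date"},
    "unit_price":    {"unit price", "price per item", "item cost"},
}

def to_reference(local_term):
    """Resolve a partner's local term to the shared concept, if mapped."""
    t = local_term.strip().lower()
    for concept, synonyms in REFERENCE.items():
        if t in synonyms:
            return concept
    return None  # unresolved: a heterogeneity to identify and classify

mapped  = to_reference("Shipping Date")  # -> "delivery_date"
unknown = to_reference("lead time")      # -> None: needs manual resolution
```

A production tool would back such a table with an ontology and similarity measures rather than literal synonym sets.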

Abstract:

ABSTRACT: We have witnessed an impressive development in clinical analysis laboratories, which have to provide excellent service at increasingly competitive costs. Quality management systems play a significant role in this evolution, mainly by demanding continuous improvement, which occurs not only in processes and techniques but also in the qualification of the various stakeholders. One key problem in managing a laboratory is the elimination of waste and errors while creating benefits, a concept at the base of the Lean Thinking philosophy; it is therefore essential to be able to monitor critical tasks systematically. In a laboratory increasingly focused on the user, this monitoring can be accomplished through information systems and technologies, through which it is possible to track the number of clients, peak times, average waiting-room stay, average time for delivery of results, results delivered after the expected date, and other data that support decision making. Complaints and user satisfaction must also be analyzed, both through the feedback employees receive and, mainly, through satisfaction questionnaires. Two models were mainly used: one proposed by the European Customer Satisfaction Index (ECSI), directed at the client, and the Common Assessment Framework (CAF), used in both the client and employee surveys. Two questionnaires were introduced in digital format, one at a collection center through an electronic kiosk and another on the laboratory's web page, both as alternatives to the existing paper survey; the results were analyzed and conclusions drawn. A questionnaire for employees was also proposed and developed, intended to provide useful decision-support data, given the importance of employees in customer interaction and in quality assurance throughout the whole clinical process. The results were evaluated globally, since internal company policy did not allow them to be presented, and some benefits of this questionnaire were discussed empirically. The main goals of this study were to implement electronic satisfaction questionnaires and analyze the results, comparing them with the ECSI study, in order to emphasize the importance of analyzing professional motivation and customer satisfaction simultaneously, with the aim of improving decision support systems.

Abstract:

ABSTRACT - Nursing records at Centro Hospitalar Lisboa Norte, E.P.E. (CHLN) are kept on paper or in information systems (IS) specific to each service, using various applications such as Alert, Picis, etc. This diversity creates constraints in the flow of information, owing to the lack of interoperability between the respective systems. This reality can have an impact on patient quality and safety, with the possibility of errors and/or adverse events. It can also affect the privacy and confidentiality of clinical data, decision making, clinical and financial management, and the production of information useful for scientific research. CHLN is implementing an IS capable of handling nursing records, integrated into a patient-centered electronic health record that follows the nursing-process methodology and uses the coded language of the International Classification for Nursing Practice (CIPE). This research, duly authorized by the CHLN Board of Directors, set out to answer the starting question: Are the nurses who use the CHLN Nursing Desktop satisfied with this information system? To this end, an exploratory approach was developed, drawing on a literature review of nursing information systems and their evaluation based on the DeLone and McLean Information Systems Success Model. A case study with a quantitative approach was then carried out, applying a questionnaire survey to the 262 CHLN nurses in the services already using the IS, between May and June 2014, with a response rate of 84%.
The questionnaire results, subjected to univariate and bivariate statistical analysis using descriptive and inferential procedures aimed at producing syntheses aligned with the study objectives, made it possible to characterize the satisfaction level of nurses as users of the "nursing desktop" supported by Information and Communication Technologies. On the scale used (1 to 5), the mean level of overall satisfaction (2.78) was slightly below the midpoint (3). Nevertheless, most respondents (81.5%) do not intend to abandon the IS they use. The results show that nurses' satisfaction with the implementation and use of the nursing IS reflects a successful CHLN strategy, although some areas showed lower satisfaction levels, such as "processing speed", "computer equipment" and "technical support"; these could receive greater attention and reflection from top management in a strategy of continuous quality improvement, with important future benefits for the institution's governance, for professionals and for patients.

Abstract:

ABSTRACT - Healthcare organizations in general, and hospitals in particular, are frequently recognized as having particularities and specificities that lend special complexity to their production process and their management (Jacobs, 1974; Butler, 1995). Accordingly, certain topics emerge as priorities in the hospital literature, both in research and in the evaluation of hospital performance, notably those related to production, financing, quality, efficiency and performance assessment. The state of the art in evaluating the performance of healthcare organizations seems to follow the trilogy defined by Donabedian (1985): Structure, Process and Outcomes. There are several perspectives for outcome-based performance assessment: effectiveness, efficiency or financial performance. Whichever is used, risk adjustment is necessary to evaluate the activity of healthcare organizations, as a way of measuring the patient characteristics that can influence health outcomes. Possible outcome indicators include mortality (final outcomes), complications and readmissions (intermediate outcomes). With the exception of the studies by Thomas (1996) and Thomas and Hofer (1998 and 1999), practically no one contests the relationship between these indicators and the effectiveness of care, although attention is drawn to the need to define risk-adjustment models and to some conceptual and operational difficulties in achieving this objective. As for the technical efficiency of hospitals, the indicators traditionally most used for its evaluation are average costs and average length of stay. Here too, the great majority of studies indicate that severity increases the explanatory power of resource consumption and that risk adjustment is useful for evaluating hospital efficiency.
Regarding the systems used to measure severity and, consequently, to adjust for risk, their development generally raises two types of concern: defining the data-collection sources and defining the moments of measurement. Ultimately, the dilemma lies in defining priorities and what one is willing to sacrifice. When financial aspects are considered decisive, it is natural to rely almost exclusively on discharge-abstract data as the information source. When construct and content validity are to be preserved, resorting to clinical-record data is inevitable. The definition of the measurement moments has repercussions at two levels of analysis: the economic neutrality of the system and its prospectivity. The impact of these issues on the evaluation of hospital effectiveness and efficiency is not a settled matter: some authors defend models based on discharge abstracts, others defend the supremacy of models based on clinical-record data, and still others argue that the choice is indifferent, so that it should follow more pragmatic criteria, such as feasibility and the respective implementation and operating costs. As for the possibilities currently available in Portugal for applying risk-adjustment systems, it is practically impossible in the short term to apply models based on clinical data. This should not prevent hospital information systems from being changed in the medium term so as to allow the eventual use of such models.
Several problems arise when applying risk-adjustment systems to populations, or population subgroups, different from those on which the system was originally built; it is necessary to verify the model's fit to the population in question, in terms of its calibration and discrimination.
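The two validation checks named at the end, discrimination and calibration, have standard computable forms: the c-statistic (area under the ROC curve) and the observed-to-expected outcome ratio. A minimal sketch with hypothetical predicted mortality risks:

```python
# Sketch of validating a risk-adjustment model on a new population:
# discrimination via the c-statistic (does the model rank patients who
# die higher?) and calibration via the observed-to-expected (O/E)
# mortality ratio. Risks and outcomes are hypothetical.

def c_statistic(risks, outcomes):
    """Probability that a random positive case outranks a random negative."""
    pos = [r for r, y in zip(risks, outcomes) if y == 1]
    neg = [r for r, y in zip(risks, outcomes) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def oe_ratio(risks, outcomes):
    """Observed deaths / expected deaths; near 1.0 means well calibrated."""
    return sum(outcomes) / sum(risks)

risks    = [0.05, 0.10, 0.20, 0.40, 0.70, 0.90]  # predicted mortality
outcomes = [0,    0,    0,    1,    1,    1]      # observed (1 = died)

auc = c_statistic(risks, outcomes)  # 1.0 here: perfect discrimination
oe  = oe_ratio(risks, outcomes)     # > 1 means the model under-predicts
```

In practice calibration is also examined per risk band (e.g., Hosmer-Lemeshow-style groups), not only overall.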

Abstract:

Information systems are widespread and used by anyone with a computing device, as well as by corporations and governments. Security leaks are often introduced during the development of an application. The reasons for these security bugs are multiple, but among them one can easily identify that it is very hard to define and enforce relevant security policies in modern software. This is because modern applications often rely on container sharing and multi-tenancy where, for instance, data can be stored in the same physical space but is logically mapped into different security compartments or data structures. In turn, these security compartments, into which data is classified by security policies, can also be dynamic and depend on runtime data. In this thesis we introduce and develop the novel notion of dependent information flow types, and focus on the problem of ensuring data confidentiality in data-centric software. Dependent information flow types fit within the standard framework of dependent type theory but, unlike usual dependent types, crucially allow the security level of a type, rather than just the structural data type itself, to depend on runtime values. Our dependent function and dependent sum information flow types provide a direct, natural, and elegant way to express and enforce fine-grained security policies on programs: namely, programs that manipulate structured data types in which the security level of a structure field may depend on values dynamically stored in other fields. The main contribution of this work is an efficient analysis that allows programmers to verify, during the development phase, whether programs have information leaks, that is, whether programs protect the confidentiality of the information they manipulate. We also implemented a prototype typechecker, which can be found at http://ctp.di.fct.unl.pt/DIFTprototype/.
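The thesis develops a static type system, but the core idea of a security level indexed by a runtime value can be illustrated with a rough runtime analogue: a record whose "details" field is confidential to whatever principal is named in its "owner" field. Names and policy below are hypothetical, not the thesis's calculus.

```python
# Rough runtime analogue of a value-dependent security label: the
# effective confidentiality level of `details` depends on the value
# stored in another field (`owner`). Policy and names are hypothetical;
# the thesis enforces this statically, at type-checking time.

LEVELS = {"public": 0, "user": 1, "admin": 2}

class Record:
    def __init__(self, owner, details):
        self.owner = owner        # runtime value the label depends on
        self.details = details    # confidential to self.owner's compartment

    def read_details(self, who, clearance):
        """Allow the owner, or anyone with admin clearance, to read."""
        if who == self.owner or LEVELS[clearance] >= LEVELS["admin"]:
            return self.details
        raise PermissionError("information-flow violation")

r = Record(owner="alice", details="alice's medical data")
owner_read = r.read_details("alice", "user")   # permitted: same compartment
admin_read = r.read_details("eve", "admin")    # permitted: higher clearance
```

The static approach catches such violations before the program ever runs, which is precisely what the runtime check above cannot guarantee.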

Abstract:

Hospitals nowadays collect vast amounts of data related to patient records. All these data hold valuable knowledge that can be used to improve hospital decision making, and data mining techniques aim precisely at extracting useful knowledge from raw data. This work describes the implementation of a medical data mining project based on the CRISP-DM methodology. Recent real-world data, from 2000 to 2013, related to inpatient hospitalization were collected from a Portuguese hospital. The goal was to predict generic hospital Length Of Stay based on indicators that are commonly available at the hospitalization process (e.g., gender, age, episode type, medical specialty). At the data preparation stage, the data were cleaned and variables were selected and transformed, leading to 14 inputs. Next, at the modeling stage, a regression approach was adopted in which six learning methods were compared: Average Prediction, Multiple Regression, Decision Tree, Artificial Neural Network ensemble, Support Vector Machine, and Random Forest. The best model was obtained by the Random Forest method, which presents a high coefficient of determination (0.81). This model was then opened up using a sensitivity analysis procedure, which revealed three influential input attributes: the hospital episode type, the physical service where the patient is hospitalized, and the associated medical specialty. Such extracted knowledge confirmed that the obtained predictive model is credible and has potential value for supporting the decisions of hospital managers.
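The "opening the model" step can be sketched as permutation-style sensitivity analysis: shuffle one input column and measure how much prediction error grows; influential inputs cause a large increase, ignored inputs none. The model and data below are hypothetical toys, not the paper's Random Forest or hospital data.

```python
# Sketch of permutation-style sensitivity analysis: scramble one input
# and measure the increase in error. Model and data are hypothetical
# toys standing in for the fitted Length-Of-Stay model.
import random

def mae(model, rows, ys):
    """Mean absolute error of `model` over feature rows."""
    return sum(abs(model(r) - y) for r, y in zip(rows, ys)) / len(ys)

def sensitivity(model, rows, ys, col, seed=0):
    """Error increase when column `col` is randomly permuted."""
    rng = random.Random(seed)
    shuffled = [r[col] for r in rows]
    rng.shuffle(shuffled)
    permuted = [r[:col] + (v,) + r[col+1:] for r, v in zip(rows, shuffled)]
    return mae(model, permuted, ys) - mae(model, rows, ys)

def model(r):
    """Toy 'length of stay' model: depends on severity (col 0) only."""
    return 2.0 * r[0]

rows = [(1, 7), (2, 3), (3, 9), (4, 1), (5, 5)]   # (severity, weekday)
ys   = [2.0, 4.0, 6.0, 8.0, 10.0]

influential = sensitivity(model, rows, ys, col=0)  # error grows: it matters
irrelevant  = sensitivity(model, rows, ys, col=1)  # 0.0: model ignores it
```

Applied to the fitted model, ranking inputs by this score is what surfaces attributes like episode type and medical specialty as the most influential.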

Abstract:

Information security is concerned with protecting information, which can be stored, processed, or transmitted within the critical information systems of organizations, against loss of confidentiality, integrity, or availability. Protection measures to prevent these problems result from the implementation of controls in several dimensions: technical, administrative, or physical. A vital objective for military organizations is to ensure superiority in contexts of information warfare and competitive intelligence. The problem of information security in military organizations has therefore been a topic of intensive work at both national and transnational levels, and extensive conceptual and standardization work is being produced. A current effort is to develop automated decision support systems that assist military decision makers, at different levels of the command chain, in providing suitable control measures that can effectively deal with potential attacks and, at the same time, prevent, detect, and contain vulnerabilities targeting their information systems. The concepts and processes of the Case-Based Reasoning (CBR) methodology closely resemble classical military processes and doctrine, in particular the analysis of “lessons learned” and the definition of “modes of action”. The present paper therefore addresses the modeling and design of a CBR system with two key objectives: to support an effective response in the context of information security for military organizations, and to allow scenario planning and analysis for training and auditing processes.
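The retrieve/reuse steps of a CBR cycle can be sketched directly: a new incident is matched against past cases (the "lessons learned") and the closest case's stored countermeasure (its "mode of action") is proposed. The case base, features, and responses below are hypothetical.

```python
# Sketch of the retrieve/reuse steps of a CBR cycle for incident
# response: match a new attack against past cases and propose the most
# similar case's countermeasure. Case base is hypothetical.

def similarity(a, b):
    """Fraction of matching attributes between two incident descriptions."""
    return sum(a[k] == b[k] for k in a) / len(a)

def retrieve(new_case, case_base):
    """Past case most similar to the new incident."""
    return max(case_base, key=lambda c: similarity(new_case, c["problem"]))

case_base = [
    {"problem": {"vector": "phishing", "target": "email", "scope": "user"},
     "response": "revoke credentials, mandatory awareness briefing"},
    {"problem": {"vector": "malware", "target": "workstation", "scope": "unit"},
     "response": "isolate host, image analysis, signature update"},
    {"problem": {"vector": "ddos", "target": "web", "scope": "network"},
     "response": "activate traffic scrubbing, fail over to backup link"},
]

incident = {"vector": "malware", "target": "workstation", "scope": "user"}
best = retrieve(incident, case_base)   # the malware case (2/3 attribute match)
```

A full CBR system would follow retrieval with revision of the proposed response and retention of the adapted case, closing the classic retrieve-reuse-revise-retain loop.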

Abstract:

This paper presents a proposal for a management model based on reliability requirements concerning Cloud Computing (CC). The proposal was based on a literature review focused on the problems, challenges, and ongoing studies related to the safety and reliability of Information Systems (IS) in this technological environment, examining the existing obstacles and challenges from the point of view of respected authors on the subject. The main issues are addressed and structured as a model called the "Trust Model for Cloud Computing environment". This is a proactive proposal that aims to organize and discuss management solutions for the CC environment, targeting improved reliability of IS application operation for both providers and their customers. Central to trust, one of the CC challenges is the development of models for mutual audit management agreements, so that a formal relationship involving the relevant legal responsibilities can be established. To establish and control the appropriate contractual requirements, it is necessary to adopt technologies that can collect the data needed to inform risk decisions, such as access usage, security controls, location, and other references related to the use of the service. In this process, cloud service providers and consumers themselves must have metrics and controls to support cloud-use management in compliance with the SLAs agreed between the parties. Organizing these studies and disseminating them in the market as a conceptual model able to establish parameters regulating a reliable relationship between providers and users of IT services in the CC environment is a useful instrument to guide providers, developers, and users toward secure and reliable services and applications.
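The metrics-and-controls idea can be sketched as a consumer-side check of collected measurements against the agreed SLA clauses, producing a list of violations for audit follow-up. The metric names and thresholds below are hypothetical, not part of the proposed trust model.

```python
# Sketch of SLA compliance checking: compare collected cloud-usage
# measurements against agreed clauses and list violations for audit.
# Metric names and thresholds are hypothetical.

SLA = {
    "availability_pct":  {"min": 99.9},   # monthly uptime
    "response_time_ms":  {"max": 200},    # p95 latency
    "encrypted_at_rest": {"equals": True},
}

def check_sla(measurements, sla=SLA):
    """Return the list of violated SLA clauses for audit follow-up."""
    violations = []
    for metric, rule in sla.items():
        value = measurements.get(metric)
        if "min" in rule and value < rule["min"]:
            violations.append(metric)
        elif "max" in rule and value > rule["max"]:
            violations.append(metric)
        elif "equals" in rule and value != rule["equals"]:
            violations.append(metric)
    return violations

report = check_sla({"availability_pct": 99.95,
                    "response_time_ms": 240,
                    "encrypted_at_rest": True})
# -> ["response_time_ms"]
```

In the mutual-audit setting the paper envisions, both provider and consumer would run such checks over independently collected measurements and reconcile the results.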