23 results for Computer Applications, Computer Skills, Project Managers, Training


Relevance: 40.00%

Publisher:

Abstract:

Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.

Relevance: 40.00%

Publisher:

Abstract:

ABSTRACT: The Electronic Health Record (EHR; Registo de Saúde Electrónico, RSE) is of vital importance for improving patient care and safety, for giving health professionals access to patient information regardless of when and where clinical care is provided, for guaranteeing the confidentiality of the data, and for reducing health service expenditure. It is on the basis of this importance that, within the scope of the Master's in Health Management at the Escola Nacional de Saúde Pública, we developed a research project whose objectives are to describe the state of the art of health information systems and of the EHR in Portugal, Europe, and North America, to identify the importance of the EHR for health professionals and for patients, and to assess the influence of certain factors on the acceptance of the EHR by health professionals. According to some authors, the factors conditioning acceptance of the EHR may be: age, training, computer skills, length of professional experience, and health professionals' understanding of the benefits of the EHR. We therefore selected these factors to determine whether they are indeed the ones that drive acceptance of the EHR. The study was addressed to service directors, physicians, nurses, and head nurses from five national hospitals. The 20 participants answered a questionnaire made up of closed questions of the factual, opinion, and information types. The methodology was descriptive, and the data were analysed quantitatively.

Spearman's coefficient was used to assess the existence of relationships between the variables, and its use allowed us to conclude that: there is no evidence of a relationship between age and acceptance of the EHR; length of professional experience does not determine acceptance of the EHR; there is evidence of a relationship between computer skills and acceptance of the EHR; training in the area of data digitisation conditions acceptance of the system; and there is evidence of a relationship between health professionals' opinion of how the EHR performs and their acceptance of it.
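The rank-correlation test described above can be sketched in a few lines. The questionnaire scores below are purely illustrative (the study's actual data are not reproduced here); the tie-aware ranking follows the standard definition of Spearman's coefficient:

```python
# Minimal pure-Python Spearman rank correlation, sketching the test the
# study applied. The sample scores are hypothetical, not the study's data.

def ranks(values):
    """Average 1-based ranks, handling ties the standard way."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of the tied rank positions
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Illustrative 1-5 questionnaire scores for eight hypothetical respondents:
computer_skills = [2, 4, 5, 1, 3, 5, 2, 4]
ehr_acceptance  = [3, 4, 5, 2, 3, 5, 2, 5]
rho = spearman(computer_skills, ehr_acceptance)
print(f"rho = {rho:.2f}")  # a rho near +1 suggests a monotonic relationship
```

In practice one would also compute a p-value (e.g. with `scipy.stats.spearmanr`) before claiming evidence of a relationship, as the study does.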

Relevance: 40.00%

Publisher:

Abstract:

Dissertation submitted to obtain the degree of Doctor in Informatics

Relevance: 40.00%

Publisher:

Abstract:

A Work Project, presented as part of the requirements for the award of a Master's Degree in Economics from NOVA – School of Business and Economics

Relevance: 40.00%

Publisher:

Abstract:

The Graphics Processing Unit (GPU) is present in almost every modern personal computer. Despite its special-purpose design, GPUs have been increasingly used for general-purpose computations, with very good results. Hence, there is a growing effort from the community to seamlessly integrate these devices into everyday computing. However, to fully exploit the potential of a system comprising GPUs and CPUs, these devices should be presented to the programmer as a single platform. The efficient combination of the power of CPU and GPU devices is highly dependent on each device's characteristics, resulting in platform-specific applications that cannot be ported to different systems. Moreover, the most efficient work balance among devices is highly dependent on the computations to be performed and the respective data sizes. In this work, we propose a solution for heterogeneous environments based on the abstraction level provided by algorithmic skeletons. Our goal is to take full advantage of all the CPU and GPU devices present in a system, without the need for different kernel implementations or explicit work distribution. To that end, we extended Marrow, an algorithmic skeleton framework for multi-GPUs, to support CPU computations and to efficiently balance the workload between devices. Our approach is based on an offline training execution that identifies the ideal work balance and platform configuration for a given application and input data size. The evaluation of this work shows that the combination of CPU and GPU devices can significantly boost the performance of our benchmarks in the tested environments, when compared to GPU-only executions.
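The offline-training idea can be illustrated with a minimal sketch: time each device on a small sample of the workload, then derive a static split proportional to the measured throughput. The device names and timings below are hypothetical; the actual framework profiles real skeleton executions on the installed hardware:

```python
# Hedged sketch of offline-training-based work distribution: devices that
# processed a sample chunk faster receive a proportionally larger share
# of the full input. Timings here are invented for illustration.

def work_split(sample_times):
    """sample_times: device name -> seconds to process one sample chunk.
    Returns device name -> fraction of the full input it should receive."""
    throughput = {dev: 1.0 / t for dev, t in sample_times.items()}
    total = sum(throughput.values())
    return {dev: tp / total for dev, tp in throughput.items()}

# Hypothetical measurements for one application and one input size:
split = work_split({"cpu": 4.0, "gpu0": 1.0, "gpu1": 2.0})
# gpu0 is 4x faster than the cpu, so it gets 4x the cpu's share of the data.
print(split)
```

A real system would cache such splits per application and input size, which matches the abstract's observation that the ideal balance depends on both.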

Relevance: 40.00%

Publisher:

Abstract:

This publication reflects the views only of the authors, and the Commission cannot be held responsible for any use which may be made of the information contained therein.

Relevance: 40.00%

Publisher:

Abstract:

Current computer systems have evolved from featuring only a single processing unit and limited RAM, on the order of kilobytes or a few megabytes, to featuring several multicore processors, offering on the order of several tens of concurrent execution contexts, and main memory on the order of several tens to hundreds of gigabytes. This makes it possible to keep all the data of many applications in main memory, leading to the development of in-memory databases. Compared to disk-backed databases, in-memory databases (IMDBs) are expected to provide better performance by incurring less I/O overhead. In this dissertation, we present a scalability study of two general-purpose IMDBs on multicore systems. The results show that current general-purpose IMDBs do not scale on multicores, due to contention among threads running concurrent transactions. In this work, we explore different directions to overcome the scalability issues of IMDBs on multicores, while enforcing strong isolation semantics. First, we present a solution that requires no modification to either the database systems or the applications, called MacroDB. MacroDB replicates the database among several engines, using a master-slave replication scheme, where update transactions execute on the master, while read-only transactions execute on the slaves. This reduces contention, allowing MacroDB to offer scalable performance under read-only workloads, while update-intensive workloads suffer a performance loss, when compared to the standalone engine. Second, we delve into the database engine and identify the concurrency control mechanism used by the storage sub-component as a scalability bottleneck. We then propose a new locking scheme that allows the removal of such mechanisms from the storage sub-component. This modification offers performance improvements under all workloads, when compared to the standalone engine, although scalability remains limited to read-only workloads.
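The routing rule at the heart of the MacroDB design described above can be sketched as follows. The engines here are stand-ins (plain strings) for full database instances, and update propagation from master to slaves is omitted:

```python
# Hedged sketch of master-slave transaction routing: all updates go to a
# single master engine, read-only transactions are spread round-robin over
# slave replicas, so reads never contend with each other on the master.
import itertools

class MacroRouter:
    def __init__(self, master, slaves):
        self.master = master
        self._next_slave = itertools.cycle(slaves)  # round-robin balancing

    def route(self, transaction_is_readonly):
        if transaction_is_readonly:
            return next(self._next_slave)  # reads spread across replicas
        return self.master                 # updates serialize on one engine

router = MacroRouter("master", ["slave-1", "slave-2"])
print(router.route(False))  # master
print(router.route(True))   # slave-1
print(router.route(True))   # slave-2
```

This also shows why the scheme favors read-heavy workloads: adding slaves adds read capacity, while every update still funnels through the single master.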
Next, we address the scalability limitations for update-intensive workloads, and propose reducing the locking granularity from the table level to the attribute level. This further improves performance for intensive and moderate update workloads, at a slight cost for read-only workloads, with scalability limited to read-intensive and read-only workloads. Finally, we investigate the impact applications have on the performance of database systems, by studying how the order of operations inside transactions influences database performance. We then propose a Read-before-Write (RbW) interaction pattern, under which transactions perform all read operations before executing write operations. The RbW pattern allowed TPC-C to achieve scalable performance on our modified engine for all workloads. Additionally, the RbW pattern allowed our modified engine to achieve scalable performance on multicores, almost up to the total number of cores, while enforcing strong isolation.
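The RbW pattern itself is easy to illustrate. In the hypothetical key-value "engine" below (the dissertation applies the pattern to a full SQL engine and the TPC-C benchmark), a transaction gathers every value it needs before issuing any update, so no write is interleaved with the read phase:

```python
# Illustrative sketch of the Read-before-Write (RbW) pattern: all reads
# happen before any write, so under a locking engine the transaction would
# hold no write locks while still reading. The dict-based store is a
# stand-in for a real database engine.

db = {"stock_a": 10, "stock_b": 7}

def transfer_rbw(src, dst, qty):
    # Read phase: gather every value the transaction needs, up front.
    src_qty = db[src]
    dst_qty = db[dst]
    if src_qty < qty:
        return False  # abort before any state was modified
    # Write phase: only now are the updates applied.
    db[src] = src_qty - qty
    db[dst] = dst_qty + qty
    return True

transfer_rbw("stock_a", "stock_b", 3)
print(db)  # {'stock_a': 7, 'stock_b': 10}
```

Grouping all writes at the end shortens the window in which write locks are held, which is one intuition for why the pattern helps concurrent transactions scale.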

Relevance: 40.00%

Publisher:

Abstract:

Ship tracking systems allow maritime organizations concerned with safety at sea to obtain information on the current location and route of merchant vessels. Thanks to space technology, in recent years the geographical coverage of ship tracking platforms has increased significantly, from radar-based near-shore traffic monitoring towards a worldwide picture of the maritime traffic situation. The long-range tracking systems currently in operation allow the storage of ship position data over many years: a valuable source of knowledge about the shipping routes between different ocean regions. The outcome of this Master's project is a software prototype for the estimation of the most operated shipping route between any two geographical locations. The analysis is based on historical ship positions acquired with long-range tracking systems. The proposed approach makes use of a Genetic Algorithm applied to a training set of relevant ship positions extracted from the long-term tracking database of the European Maritime Safety Agency (EMSA). The analysis of some representative shipping routes is presented, and the quality of the results and their operational applications are assessed by a maritime safety expert.
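As a rough illustration of the approach, the sketch below evolves a small set of intermediate waypoints between two endpoints so that the resulting route passes close to a handful of historical position fixes. All coordinates, parameters, and the fitness function are invented for illustration; the EMSA prototype operates on real long-range tracking data and a far richer route representation:

```python
# Toy Genetic Algorithm sketching the route-estimation idea: candidate
# routes are waypoint lists, fitness rewards passing near historical
# ship positions, and selection/crossover/mutation evolve the population.
import random

random.seed(42)

HISTORY = [(1.0, 1.1), (2.0, 2.1), (3.0, 2.9), (4.0, 4.2)]  # fake fixes
START, END = (0.0, 0.0), (5.0, 5.0)
N_WAYPOINTS = 3

def fitness(waypoints):
    """Negative sum of each fix's squared distance to its nearest route point."""
    route = [START] + waypoints + [END]
    return -sum(min((hx - x) ** 2 + (hy - y) ** 2 for x, y in route)
                for hx, hy in HISTORY)

def random_route():
    return [(random.uniform(0, 5), random.uniform(0, 5))
            for _ in range(N_WAYPOINTS)]

def crossover(a, b):
    cut = random.randrange(1, N_WAYPOINTS)
    return a[:cut] + b[cut:]

def mutate(route, rate=0.2):
    return [(x + random.gauss(0, 0.3), y + random.gauss(0, 0.3))
            if random.random() < rate else (x, y) for x, y in route]

population = [random_route() for _ in range(30)]
for generation in range(60):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]  # truncation selection keeps the fittest
    population = parents + [mutate(crossover(random.choice(parents),
                                             random.choice(parents)))
                            for _ in range(20)]

best = max(population, key=fitness)
print(f"best fitness: {fitness(best):.3f}")
```

Keeping the parents in each new generation (elitism) guarantees the best fitness never regresses, a common safeguard in GA designs of this kind.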