969 results for Medicine--Data processing
Abstract:
This thesis describes advances in the characterisation, calibration and data processing of optical coherence tomography (OCT) systems. Femtosecond (fs) laser inscription was used to produce OCT-phantoms. Transparent materials are generally inert to infrared radiation, but with fs lasers material modification occurs via non-linear processes when the highly focused light source interacts with the material. This modification is confined to the focal volume and is highly reproducible. To select the best inscription parameters, combinations of inscription parameters were tested, using three fs laser systems with different operating properties, on a variety of materials. This facilitated an understanding of the key characteristics of the produced structures, with the aim of producing viable OCT-phantoms. Finally, OCT-phantoms were successfully designed and fabricated in fused silica. The use of these phantoms to characterise many properties (resolution, distortion, sensitivity decay, scan linearity) of an OCT system was demonstrated. Quantitative methods were developed to support the characterisation of an OCT system from images of the phantoms and also to improve the quality of the OCT images. Characterisation methods include the measurement of the spatially variant resolution (point spread function (PSF) and modulation transfer function (MTF)), sensitivity and distortion. Processing of OCT data is computationally intensive. Standard central processing unit (CPU) based processing might take several minutes to a few hours to process acquired data, so data processing is a significant bottleneck. An alternative is expensive hardware-based processing such as field-programmable gate arrays (FPGAs). Recently, however, graphics processing unit (GPU) based data processing methods have been developed to minimise this processing and rendering time. These techniques include standard processing methods, a set of algorithms that process the raw interference data obtained by the detector and generate A-scans. The work presented here describes accelerated data processing and post-processing techniques for OCT systems. The GPU-based processing developed during the PhD was later implemented in a custom-built Fourier-domain optical coherence tomography (FD-OCT) system. This system currently processes and renders data in real time; its processing throughput is currently limited by the camera capture rate. OCT-phantoms have been heavily used for the qualitative characterisation and adjustment/fine-tuning of the operating conditions of the OCT system. Investigations are currently under way to characterise OCT systems using our phantoms. The work presented in this thesis demonstrates several novel techniques for fabricating OCT-phantoms and for accelerating OCT data processing using GPUs. In the process of developing the phantoms and quantitative methods, a thorough understanding and practical knowledge of OCT and fs laser processing systems was developed. This understanding led to several novel pieces of research that are not only relevant to OCT but have broader importance. For example, an extensive understanding of the properties of fs-inscribed structures will be useful in other photonic applications, such as the fabrication of phase masks, waveguides and microfluidic channels. Acceleration of data processing with GPUs is also useful in other fields.
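The standard processing chain mentioned above turns each raw spectral interferogram into an A-scan. The following is a minimal Python/NumPy sketch of what such a chain typically looks like (background subtraction, resampling to uniform wavenumber, windowing, inverse FFT); the function names, array layouts and parameters are illustrative assumptions, not the thesis code, which runs the same per-spectrum steps batched on the GPU.

```python
# Minimal sketch of a typical FD-OCT A-scan processing chain (illustrative,
# not the thesis implementation); names and parameters are assumptions.
import numpy as np

def process_ascan(spectrum, background, wavelengths):
    """Turn one raw spectral interferogram into an A-scan (depth profile)."""
    # 1. Remove the static background (reference-arm spectrum).
    fringe = spectrum - background

    # 2. Resample from wavelength to a uniform wavenumber (k) grid, since the
    #    FFT assumes samples equally spaced in k.
    k = 2 * np.pi / wavelengths            # decreasing if wavelengths increase
    k_uniform = np.linspace(k.min(), k.max(), k.size)
    fringe_k = np.interp(k_uniform, k[::-1], fringe[::-1])

    # 3. Window to suppress sidelobes, then inverse FFT into depth space.
    windowed = fringe_k * np.hanning(k_uniform.size)
    depth_profile = np.fft.ifft(windowed)

    # 4. Log-scaled magnitude; keep only the positive-depth half of the array.
    return 20 * np.log10(np.abs(depth_profile[: depth_profile.size // 2]) + 1e-12)
```

On a GPU the same pipeline is applied to thousands of spectra in parallel (batched FFTs), which is where real-time throughput comes from.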
Abstract:
Data processing services for the Meteosat geostationary satellite are presented. The implemented services correspond to the different levels of remote-sensing data processing, including noise reduction at the preprocessing level, cloud-mask extraction at the low level and fractal dimension estimation at the high level. The cloud mask is obtained as a result of Markovian segmentation of infrared data. To overcome the high computational complexity of Markovian segmentation, a parallel algorithm is developed. The fractal dimension of Meteosat data is estimated using fractional Brownian motion models.
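As a rough illustration of fractal dimension estimation with a fractional Brownian motion (fBm) model: for fBm the mean squared increment scales as E[|X(t+h) - X(t)|^2] ~ h^(2H), so the Hurst exponent H can be read off a log-log fit and an image modelled as an fBm surface has fractal dimension D = 3 - H. The sketch below follows that idea; the row-wise averaging, lag range and variable names are assumptions for illustration, not the paper's method.

```python
# Hedged sketch: estimate the fractal dimension of an image under an fBm
# surface model via the increment (structure-function) scaling law.
import numpy as np

def fractal_dimension_fbm(image, max_lag=32):
    image = np.asarray(image, dtype=float)
    lags = np.arange(1, max_lag + 1)
    # Mean squared increment along rows for each lag h.
    msd = np.array([np.mean((image[:, h:] - image[:, :-h]) ** 2) for h in lags])
    # Slope of log(msd) versus log(h) is 2H.
    slope, _ = np.polyfit(np.log(lags), np.log(msd), 1)
    hurst = slope / 2.0
    return 3.0 - hurst  # fractal dimension of the fBm surface

# Rough sanity check: white noise gives H near 0 (D near 3), while a smooth
# ramp gives H near 1 (D near 2).
```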
Abstract:
As massive data sets become increasingly available, people face the problem of how to effectively process and understand these data. Traditional sequential computing models are giving way to parallel and distributed computing models, such as MapReduce, due both to the large size of the data sets and to their high dimensionality. This dissertation, in the same direction as other research based on MapReduce, develops effective techniques and applications using MapReduce that can help people solve large-scale problems. Three different problems are tackled in the dissertation. The first deals with processing terabytes of raster data in a spatial data management system: aerial imagery files are broken into tiles to enable data-parallel computation. The second and third problems deal with dimension reduction techniques that can be used to handle data sets of high dimensionality. Three variants of the nonnegative matrix factorization technique are scaled up to factorize matrices with dimensions on the order of millions in MapReduce, based on different matrix multiplication implementations. Two algorithms, which compute CANDECOMP/PARAFAC and Tucker tensor decompositions respectively, are parallelized in MapReduce by carefully partitioning the data and arranging the computation to maximize data locality and parallelism.
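The tile-based raster processing described above fits the MapReduce pattern naturally: the mapper emits a key-value pair per tile and the reducer combines values that share a tile key. The sketch below simulates that pattern in plain Python (the dissertation's jobs run on an actual MapReduce cluster); the tile size and the per-tile statistic are assumptions chosen only to make the data-parallel structure visible.

```python
# Illustrative MapReduce-style sketch of tile-based raster processing.
from collections import defaultdict
import numpy as np

TILE = 256  # assumed tile edge length in pixels

def map_tiles(image_id, image):
    """Mapper: split one raster into tiles and emit (tile_key, per-tile result)."""
    rows, cols = image.shape
    for r in range(0, rows, TILE):
        for c in range(0, cols, TILE):
            tile = image[r:r + TILE, c:c + TILE]
            # Any per-tile computation could go here; mean intensity is a stand-in.
            yield (image_id, r // TILE, c // TILE), float(tile.mean())

def reduce_tiles(tile_key, values):
    """Reducer: combine all results emitted for the same tile key."""
    return tile_key, sum(values) / len(values)

def run(images):
    """Driver simulating the shuffle phase between map and reduce."""
    grouped = defaultdict(list)
    for image_id, image in images.items():
        for key, value in map_tiles(image_id, image):
            grouped[key].append(value)
    return dict(reduce_tiles(k, v) for k, v in grouped.items())

if __name__ == "__main__":
    print(run({"scene_001": np.random.rand(512, 512)}))
```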
Abstract:
The generation of heterogeneous big data sources with ever-increasing volumes, velocities and veracities over the last few years has inspired the data science and research community to address the challenge of extracting knowledge from big data. Such a wealth of generated data across the board can be intelligently exploited to advance our knowledge about our environment, public health, critical infrastructure and security. In recent years we have developed generic approaches to process such big data at multiple levels for advancing decision support. These specifically concern data processing with semantic harmonisation, low-level fusion, analytics, knowledge modelling with high-level fusion, and reasoning. Such approaches will be introduced and presented in the context of the TRIDEC project results on critical oil and gas industry drilling operations and of the ongoing large eVacuate project on critical crowd behaviour detection in confined spaces.
Abstract:
The Data Processing Department of the ISHC has developed coding forms for the data to be entered into the program. The Highway Planning and Programming Department and the Design Department are responsible for coding and submitting the necessary data forms to Data Processing for noise prediction on the highway sections.
Abstract:
By providing vehicle-to-vehicle and vehicle-to-infrastructure wireless communications, vehicular ad hoc networks (VANETs), also known as "networks on wheels", can greatly enhance traffic safety, traffic efficiency and the driving experience for intelligent transportation systems (ITS). However, the unique features of VANETs, such as high mobility and uneven distribution of vehicular nodes, pose critical efficiency and reliability challenges for the implementation of VANETs. This dissertation is motivated by the great application potential of VANETs in the design of efficient in-network data processing and dissemination. Considering the significance of message aggregation, data dissemination and data collection, this dissertation aims to enhance traffic safety and traffic efficiency, and to develop novel commercial applications based on VANETs, along four aspects: 1) accurate and efficient message aggregation to detect on-road safety-relevant events, 2) reliable data dissemination to notify remote vehicles, 3) efficient and reliable spatial data collection from vehicular sensors, and 4) novel promising applications to exploit the commercial potential of VANETs. Specifically, to enable cooperative detection of safety-relevant events on the roads, the structure-less message aggregation (SLMA) scheme is proposed to improve communication efficiency and message accuracy. The scheme of relative position based message dissemination (RPB-MD) is proposed to reliably and efficiently disseminate messages to all intended vehicles in the zone-of-relevance under varying traffic density. With numerous vehicular sensor data available through VANETs, the scheme of compressive sampling based data collection (CS-DC) is proposed to efficiently collect spatially relevant data on a large scale, especially in dense traffic. In addition, with novel and efficient solutions proposed for the application-specific issues of data dissemination and data collection, several appealing value-added applications for VANETs are developed to exploit their commercial potential, namely general purpose automatic survey (GPAS), VANET-based ambient ad dissemination (VAAD) and VANET-based vehicle performance monitoring and analysis (VehicleView). Thus, by improving the efficiency and reliability of in-network data processing and dissemination, including message aggregation, data dissemination and data collection, together with the development of novel promising applications, this dissertation will help push VANETs further towards the stage of massive deployment.
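To make the compressive-sampling idea behind a CS-based data collection scheme concrete: a field of road-side readings that is sparse (or sparsely representable) can be reconstructed from far fewer random linear measurements than its length. The sketch below uses orthogonal matching pursuit for recovery; the dimensions, sparsity level and recovery algorithm are illustrative assumptions, not the CS-DC protocol itself.

```python
# Hedged illustration of compressive sampling: measure y = Phi @ x with
# m << n random projections, then recover the sparse x by OMP.
import numpy as np

def omp(Phi, y, sparsity):
    """Recover a sparsity-sparse x from y = Phi @ x by orthogonal matching pursuit."""
    residual = y.copy()
    support = []
    x_hat = np.zeros(Phi.shape[1])
    for _ in range(sparsity):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares fit on the selected support, then update the residual.
        coeffs, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coeffs
    x_hat[support] = coeffs
    return x_hat

rng = np.random.default_rng(0)
n, m, k = 200, 60, 5                                      # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(size=k)   # sparse sensor field
Phi = rng.normal(size=(m, n)) / np.sqrt(m)                # random measurement matrix
y = Phi @ x                                               # what would actually be transmitted
print("max reconstruction error:", np.max(np.abs(omp(Phi, y, k) - x)))
```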
Abstract:
Nearly 50% of the physicians in the Catalan health system currently carry out their clinical work in SAP environments. Given the nature of their work, it is easy to argue that the applications they use routinely should have been developed with the focus on the user and the context in which they work. For various organisational reasons, however, this has not been the case. This project aims to demonstrate the added value that User-Centred Design can bring to the development of clinical applications in the SAP Healthcare domain. A preliminary investigation of users' real needs was carried out, together with an analysis of the context of use and of the different user profiles; a prototype was developed with the new SAP technology for building web-style interfaces integrated into the system (SAP Webdynpro for ABAP); and finally the corresponding heuristic evaluation and user testing were performed. Three main conclusions were reached: first, the willingness of physician users to take part in this kind of project was a pleasant surprise; second, the importance of context analysis was demonstrated, as was the value of the designer being as close as possible to the end user; and finally, it should be noted that the technology used qualitatively limited several design options.
Abstract:
This paper deals with the problem of managing a medical centre so as to have control over patients, medical records, physicians, nurses, secretaries and the administrators of the application.
Abstract:
This project sets out to design and implement a medical-records management system to be used remotely over a communications network, with the main emphasis on achieving a level of security considered high.
Abstract:
Given the nature of the information contained in a medical record, it is vitally important that both access to the data and its storage are carried out with the greatest possible security.
Abstract:
Development of a cryptographic scheme to securely manage patients' medical records over a communications network. The application provides the basic security properties mentioned above, as well as the non-repudiation property, which makes it possible to guarantee that information about a visit was entered by a specific, authorised physician.
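The non-repudiation property described above typically rests on digital signatures: the physician signs each visit entry with a private key, and anyone holding the corresponding public-key certificate can later verify who entered it. Below is a minimal Python sketch of that sign-and-verify pattern using the `cryptography` library; it is an illustration, not the project's implementation, and the entry contents and key handling are assumptions.

```python
# Minimal sketch of non-repudiation via a digital signature (illustrative only).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# In a real PKI the key pair would be bound to the physician by a certificate
# issued by a certification authority; here one is generated locally.
physician_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

visit_entry = b"patient=1234; date=2011-05-02; note=example visit; physician=Dr. Example"
signature = physician_key.sign(
    visit_entry,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# Verification with the physician's public key succeeds only if this exact
# entry was signed with the matching private key, which is what supports
# non-repudiation.
try:
    physician_key.public_key().verify(
        signature,
        visit_entry,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )
    print("signature valid: entry attributable to the key holder")
except InvalidSignature:
    print("signature invalid")
```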
Abstract:
To explore the possibilities of cryptography for guaranteeing the secure consultation and modification of medical records over a communications network.
Abstract:
Implementation of a cryptographic scheme based on a PKI (Public Key Infrastructure) to securely manage patients' medical records within a communications network.
Abstract:
This work presents a solution based on public-key cryptosystems, certificates and digital signatures, using Java as the programming language. To extend the security functionality of the Java Developer Kit (JDK), the IAIK (Institute for Applied Information Processing and Communication) cryptographic library is used.
Abstract:
The main objective of this final-year project is to study usability in software in general and, specifically, its relation to medical software.