912 results for "Matching patient to digital phantoms"


Relevance: 100.00%

Abstract:

Long-term recording of biomedical signals such as ECG, EMG, respiration and other information (e.g. body motion) can improve diagnosis and potentially track the evolution of many widespread diseases. However, long-term monitoring requires specific solutions: portable, wearable equipment that is particularly comfortable for patients. The key issues of portable biomedical instrumentation are power consumption, long-term sensor stability, wearing comfort and wireless connectivity. In this scenario, it is valuable to build prototypes from available technologies to assess long-term personal monitoring and foster new ways to provide healthcare services. The aim of this work is to discuss the advantages and drawbacks of long-term monitoring of biopotentials and body movements using textile electrodes embedded in clothes. The textile electrodes were embedded into garments; a tight-fitting shirt and shorts were used to acquire electrocardiographic and electromyographic signals. The garments were equipped with low-power electronics for signal acquisition and wireless data transmission via Bluetooth. A small, battery-powered biopotential amplifier and three-axis acceleration body monitor was realized. The patient monitor incorporates a microcontroller and analog-to-digital signal conversion at programmable sampling frequencies. The system was able to acquire and transmit real-time signals, within a 10 m range, to any Bluetooth device (including a PDA or cellular phone). The electronics were embedded in the shirt and proved comfortable for patients to wear. Small MEMS three-axis accelerometers were also integrated. © 2011 IEEE.
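To make the acquisition chain concrete, below is a minimal host-side sketch for receiving and unpacking such a data stream. It assumes, purely for illustration, that the garment streams fixed-size little-endian packets of one ECG sample plus three accelerometer axes over a Bluetooth serial port; the port name, baud rate and packet layout are hypothetical and not taken from the paper.

```python
# Minimal host-side receiver sketch (illustrative; packet layout is assumed, not from the paper).
import struct
import serial  # pyserial; the Bluetooth link is exposed as a virtual serial port

PORT = "/dev/rfcomm0"      # hypothetical Bluetooth serial device
BAUD = 115200              # hypothetical baud rate
PACKET = "<Hhhh"           # assumed layout: uint16 ECG sample + 3x int16 accelerometer axes
SIZE = struct.calcsize(PACKET)

def stream_samples(port=PORT, baud=BAUD):
    """Yield (ecg, ax, ay, az) tuples from the wearable's Bluetooth stream."""
    with serial.Serial(port, baud, timeout=1) as link:
        while True:
            raw = link.read(SIZE)
            if len(raw) < SIZE:        # timeout or dropped link: try again
                continue
            yield struct.unpack(PACKET, raw)

if __name__ == "__main__":
    for ecg, ax, ay, az in stream_samples():
        print(f"ECG={ecg}  accel=({ax}, {ay}, {az})")
```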

Relevance: 100.00%

Abstract:

The key to prosperity in today's world is access to digital content and the skills to create new content. This article presents research on folklore artifacts carried out under the national programme "Knowledge Technologies for Creation of Digital Presentation and Significant Repositories of Folklore Heritage" (FolkKnow). FolkKnow aims to build a digital multimedia archive, "Bulgarian Folklore Heritage" (BFH), and a virtual information portal with a folk media library of digitized multimedia objects from a selected collection of the fund of the Institute of Ethnology and Folklore Studies with Ethnographic Museum (IEFSEM) of the Bulgarian Academy of Sciences (BAS). The realization of FolkKnow opens the multimedia collections to wide social applications: interactive distance learning and self-learning, research on Bulgarian traditional culture, and cultural and ethno-tourism. We study, analyze and implement techniques and methods for the digitization of multimedia objects and their annotation, and the paper discusses specific approaches used to build and protect a digital archive with multimedia content. The tasks can be systematized along the following lines:

* digitization of the selected samples;
* analysis of the objects to determine the metadata of artifacts from the selected collections and problem areas;
* a digital multimedia archive;
* socially oriented applications and virtual exhibition galleries;
* a frequency dictionary tool for texts with folklore themes (see the sketch below);
* methods, based on modern technologies, for protecting intellectual property and copyright on the digital content developed for use in digital expositions.
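As one illustration of the text-analysis tooling mentioned above, the following is a minimal sketch of a frequency dictionary builder for folklore texts. It is a generic word-count implementation rather than the project's actual tool, and the input file name is hypothetical.

```python
# Minimal frequency-dictionary sketch (generic word counting; not the FolkKnow tool itself).
import re
from collections import Counter
from pathlib import Path

def frequency_dictionary(paths):
    """Count word frequencies across a collection of UTF-8 text files."""
    counts = Counter()
    for path in paths:
        text = Path(path).read_text(encoding="utf-8").lower()
        # \w matches Cyrillic letters as well as Latin ones, which matters for Bulgarian texts
        counts.update(re.findall(r"\w+", text, flags=re.UNICODE))
    return counts

if __name__ == "__main__":
    freq = frequency_dictionary(["folk_tale_example.txt"])  # hypothetical input file
    for word, n in freq.most_common(20):
        print(f"{word}\t{n}")
```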

Relevance: 100.00%

Abstract:

Report published in the Proceedings of the National Conference on "Education and Research in the Information Society", Plovdiv, May 2016

Relevance: 100.00%

Abstract:

A new mesoscale simulation model for solids dissolution, based on a computationally efficient and versatile digital modelling approach (DigiDiss), is considered and validated against analytical solutions and published experimental data for simple geometries. As the digital model is specifically designed to handle irregular shapes and complex multi-component structures, use of the model is explored for single crystals (sugars) and clusters. Single crystals and a cluster were first scanned using X-ray microtomography to obtain a digital version of their structures, and the digitised particles and clusters were used as structural input to the digital simulation. The same particles were then dissolved in water and the dissolution process was recorded by a video camera and analysed, yielding the overall dissolution times and images of particle size and shape during dissolution. The results demonstrate the ability of the simulation method to reproduce the experimental behaviour from the known chemical and diffusion properties of the constituent phases. The paper discusses how the modelling approach will need to be extended to include other important effects, such as complex disintegration behaviour (e.g. particle ejection) and uncertainties in chemical properties. The nature of the digital modelling approach is well suited to future implementation with high-speed computation using hybrid conventional (CPU) and graphical processor (GPU) systems.
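To illustrate the digital (voxel-based) modelling idea in its simplest form, here is a two-dimensional sketch of diffusion-limited dissolution on a regular grid. It is a toy model with arbitrary parameters, not DigiDiss itself: solid cells release mass into the surrounding liquid at a rate proportional to local undersaturation, while the dissolved concentration spreads by explicit finite-difference diffusion.

```python
# Toy 2-D voxel dissolution sketch (illustrative only; not the DigiDiss model).
import numpy as np

def dissolve(solid_mask, steps=2000, D=1.0, k=0.05, c_sat=1.0, dx=1.0):
    """Evolve dissolved concentration c and per-cell solid mass m on a grid."""
    dt = 0.2 * dx * dx / D                      # within the explicit-scheme stability limit
    c = np.zeros_like(solid_mask, dtype=float)  # dissolved concentration
    m = solid_mask.astype(float)                # initial solid mass per voxel
    for _ in range(steps):
        # diffusion of the dissolved species (5-point Laplacian; periodic wrap for brevity)
        lap = (np.roll(c, 1, 0) + np.roll(c, -1, 0) +
               np.roll(c, 1, 1) + np.roll(c, -1, 1) - 4 * c) / dx**2
        c += dt * D * lap
        # dissolution flux where solid remains and the liquid is undersaturated
        solid = m > 0
        flux = np.where(solid, k * np.clip(c_sat - c, 0.0, None) * dt, 0.0)
        flux = np.minimum(flux, m)              # cannot dissolve more mass than is present
        m -= flux
        c += flux
        if not (m > 0).any():                   # fully dissolved
            break
    return c, m

if __name__ == "__main__":
    grid = np.zeros((64, 64), dtype=bool)
    grid[24:40, 24:40] = True                   # a square "crystal"
    c, m = dissolve(grid)
    print("remaining solid fraction:", m.sum() / grid.sum())
```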

Relevance: 100.00%

Abstract:

Phase noise enhancement due to digital dispersion equalization is investigated. The results indicate that phase noise from the transmitter laser can also interact with dispersion, depending on the choice of digital dispersion compensation method. © OSA 2012.
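For context, digital dispersion equalization is typically applied as an all-pass frequency-domain filter. Because laser phase noise is multiplied onto the signal either before the fibre dispersion (transmitter laser) or after it (local oscillator), it does not in general commute with this filter, which is the root of the enhancement studied here. Below is a minimal sketch of the building blocks with arbitrary link parameters chosen only for illustration.

```python
# Sketch of frequency-domain chromatic-dispersion compensation with transmitter phase noise.
# Parameters are illustrative, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)
N, Rs = 2**14, 32e9                        # samples and symbol rate (1 sample/symbol for brevity)
dt = 1.0 / Rs
omega = 2 * np.pi * np.fft.fftfreq(N, dt)

# QPSK symbols with transmitter laser phase noise (Wiener process, linewidth dnu)
symbols = np.exp(1j * (np.pi / 4) * (2 * rng.integers(0, 4, N) + 1))
dnu = 1e6                                  # 1 MHz linewidth (illustrative)
phase = np.cumsum(rng.normal(0.0, np.sqrt(2 * np.pi * dnu * dt), N))
tx = symbols * np.exp(1j * phase)

# Fibre chromatic dispersion (beta2 * L), applied in the frequency domain
beta2_L = -21.7e-27 * 80e3                 # ~17 ps/nm/km standard fibre over 80 km (illustrative)
H = np.exp(1j * 0.5 * beta2_L * omega**2)
rx = np.fft.ifft(np.fft.fft(tx) * H)

# Receiver-side digital dispersion compensation: apply the conjugate (inverse) filter
eq = np.fft.ifft(np.fft.fft(rx) * np.conj(H))

# Remove the common phase walk before measuring error, to isolate residual distortion
err = eq * np.exp(-1j * phase) - symbols
print("residual EVM after ideal CD compensation: %.2e" % np.sqrt(np.mean(np.abs(err)**2)))
```

In this arrangement the transmitter phase noise travels through the dispersion together with the signal, so receiver-side post-compensation inverts both and the residual error above sits at numerical precision. The interaction the paper examines arises when the compensation is moved to the transmitter (pre-compensation) or when local-oscillator phase noise enters after the fibre, so that the phase noise and the dispersion filter no longer commute.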

Relevance: 100.00%

Abstract:

X-ray computed tomography (CT) imaging constitutes one of the most widely used diagnostic tools in radiology today, with nearly 85 million CT examinations performed in the U.S. in 2011. CT imparts a relatively high amount of radiation dose to the patient compared to other x-ray imaging modalities, and as a result of this fact, coupled with its popularity, CT is currently the single largest source of medical radiation exposure to the U.S. population. For this reason, there is a critical need to optimize CT examinations such that the dose is minimized while the quality of the CT images is not degraded. This optimization can be difficult to achieve due to the relationship between dose and image quality: all else being equal, reducing the dose degrades image quality and can impact the diagnostic value of the CT examination.

A recent push from the medical and scientific community towards using lower doses has spawned new dose reduction technologies such as automatic exposure control (i.e., tube current modulation) and iterative reconstruction algorithms. In theory, these technologies could allow for scanning at reduced doses while maintaining the image quality of the exam at an acceptable level. Therefore, there is a scientific need to establish the dose reduction potential of these new technologies in an objective and rigorous manner. Establishing these dose reduction potentials requires precise and clinically relevant metrics of CT image quality, as well as practical and efficient methodologies to measure such metrics on real CT systems. The currently established methodologies for assessing CT image quality are not appropriate to assess modern CT scanners that have implemented those aforementioned dose reduction technologies.

Thus the purpose of this doctoral project was to develop, assess, and implement new phantoms, image quality metrics, analysis techniques, and modeling tools that are appropriate for image quality assessment of modern clinical CT systems. The project developed image quality assessment methods in the context of three distinct paradigms: (a) uniform phantoms, (b) textured phantoms, and (c) clinical images.

The work in this dissertation used the “task-based” definition of image quality. That is, image quality was broadly defined as the effectiveness with which an image can be used for its intended task. Under this definition, any assessment of image quality requires three components: (1) a well-defined imaging task (e.g., detection of subtle lesions), (2) an “observer” to perform the task (e.g., a radiologist or a detection algorithm), and (3) a way to measure the observer’s performance in completing the task at hand (e.g., detection sensitivity/specificity).

First, this task-based image quality paradigm was implemented using a novel multi-sized phantom platform (with uniform background) developed specifically to assess modern CT systems (Mercury Phantom, v3.0, Duke University). A comprehensive evaluation was performed on a state-of-the-art CT system (SOMATOM Definition Force, Siemens Healthcare) in terms of noise, resolution, and detectability as a function of patient size, dose, tube energy (i.e., kVp), automatic exposure control, and reconstruction algorithm (i.e., Filtered Back-Projection [FBP] vs. Advanced Modeled Iterative Reconstruction [ADMIRE]). A mathematical observer model (i.e., a computer detection algorithm) was implemented and used as the basis of image quality comparisons. It was found that image quality increased with increasing dose and decreasing phantom size. The CT system exhibited nonlinear noise and resolution properties, especially at very low doses, large phantom sizes, and for low-contrast objects. Objective image quality metrics generally increased with increasing dose and ADMIRE strength, and with decreasing phantom size. The ADMIRE algorithm could offer comparable image quality at reduced doses or improved image quality at the same dose (increase in detectability index by up to 163% depending on iterative strength). The use of automatic exposure control resulted in more consistent image quality with changing phantom size.

Based on those results, the dose reduction potential of ADMIRE was further assessed specifically for the task of detecting small (<=6 mm) low-contrast (<=20 HU) lesions. A new low-contrast detectability phantom (with uniform background) was designed and fabricated using a multi-material 3D printer. The phantom was imaged at multiple dose levels and images were reconstructed with FBP and ADMIRE. Human perception experiments were performed to measure the detection accuracy from FBP and ADMIRE images. It was found that ADMIRE had equivalent performance to FBP at 56% less dose.

Using the same image data as the previous study, a number of different mathematical observer models were implemented to assess which models would result in image quality metrics that best correlated with human detection performance. The models included naïve metrics of image quality, such as the contrast-to-noise ratio (CNR), and more sophisticated observer models, such as the non-prewhitening matched filter observer model family and the channelized Hotelling observer model family. It was found that the non-prewhitening matched filter observers and the channelized Hotelling observers both correlated strongly with human performance. Conversely, CNR was found not to correlate strongly with human performance, especially when comparing different reconstruction algorithms.
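As a concrete illustration of a mathematical observer model, the sketch below estimates a non-prewhitening matched filter (NPW) detectability index from stacks of signal-present and signal-absent ROI images. It is a generic textbook formulation applied to synthetic data, not the dissertation's specific implementation or its channelized Hotelling variants.

```python
# Generic NPW matched-filter observer sketch (synthetic data; not the dissertation's code).
import numpy as np

def npw_dprime(present, absent):
    """Detectability index from stacks of signal-present / signal-absent ROIs of shape (n, H, W)."""
    template = present.mean(axis=0) - absent.mean(axis=0)            # NPW template = mean difference image
    t_p = np.tensordot(present, template, axes=([1, 2], [0, 1]))     # test statistic per image
    t_a = np.tensordot(absent, template, axes=([1, 2], [0, 1]))
    pooled_var = 0.5 * (t_p.var(ddof=1) + t_a.var(ddof=1))
    return (t_p.mean() - t_a.mean()) / np.sqrt(pooled_var)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    h = w = 32
    y, x = np.mgrid[:h, :w]
    lesion = 5.0 * np.exp(-((x - 16) ** 2 + (y - 16) ** 2) / (2 * 4.0 ** 2))  # subtle Gaussian lesion
    noise = lambda n: rng.normal(0.0, 10.0, (n, h, w))                # white-noise stand-in for CT noise
    d = npw_dprime(lesion + noise(200), noise(200))
    print(f"NPW detectability index d' = {d:.2f}")
```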

The uniform background phantoms used in the previous studies provided a good first-order approximation of image quality. However, due to their simplicity and due to the complexity of iterative reconstruction algorithms, it is possible that such phantoms are not fully adequate to assess the clinical impact of iterative algorithms, because patient images obviously do not have smooth uniform backgrounds. To test this hypothesis, two textured phantoms (classified as gross texture and fine texture) and a uniform phantom of similar size were built and imaged on a SOMATOM Flash scanner (Siemens Healthcare). Images were reconstructed using FBP and Sinogram Affirmed Iterative Reconstruction (SAFIRE). Using an image subtraction technique, quantum noise was measured in all images of each phantom. It was found that with FBP, the noise was independent of the background (textured vs uniform). However, for SAFIRE, noise increased by up to 44% in the textured phantoms compared to the uniform phantom. As a result, the noise reduction from SAFIRE was found to be up to 66% in the uniform phantom but as low as 29% in the textured phantoms. Based on this result, it was clear that further investigation was needed to understand the impact that background texture has on image quality when iterative reconstruction algorithms are used.

To further investigate this phenomenon with more realistic textures, two anthropomorphic textured phantoms were designed to mimic lung vasculature and fatty soft-tissue texture. The phantoms (along with a corresponding uniform phantom) were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Scans were repeated a total of 50 times in order to obtain ensemble statistics of the noise. A novel method of estimating the noise power spectrum (NPS) from irregularly shaped ROIs was developed. It was found that SAFIRE images had highly locally non-stationary noise patterns, with pixels near edges having higher noise than pixels in more uniform regions. Compared to FBP, SAFIRE images had 60% less noise on average in uniform regions; for edge pixels, noise ranged from 20% higher to 40% lower. The noise texture (i.e., NPS) was also highly dependent on the background texture for SAFIRE. Therefore, it was concluded that quantum noise properties in uniform phantoms are not representative of those in patients for iterative reconstruction algorithms, and that texture should be considered when assessing the image quality of iterative algorithms.
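For reference, the conventional ensemble NPS estimator works on rectangular ROIs; a minimal sketch with assumed pixel spacing and synthetic data is given below. The dissertation's novel contribution, estimating the NPS from irregularly shaped ROIs, goes beyond this standard formulation.

```python
# Standard ensemble NPS estimator for rectangular ROIs (illustrative; not the irregular-ROI method).
import numpy as np

def nps_2d(rois, pixel_mm=0.5):
    """rois: (n, H, W) stack of noise-only ROIs (e.g., from image subtraction or repeated scans)."""
    n, h, w = rois.shape
    zero_mean = rois - rois.mean(axis=(1, 2), keepdims=True)      # remove DC / background offset
    dft2 = np.fft.fft2(zero_mean)                                 # per-ROI 2-D DFT
    # NPS(fx, fy) = (dx*dy / (Nx*Ny)) * <|DFT|^2>, averaged over the ensemble of ROIs
    nps = (pixel_mm ** 2 / (h * w)) * np.mean(np.abs(dft2) ** 2, axis=0)
    freqs = np.fft.fftfreq(h, d=pixel_mm)                         # cycles/mm axis (square ROI assumed)
    return np.fft.fftshift(nps), np.fft.fftshift(freqs)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    rois = rng.normal(0.0, 12.0, (50, 64, 64))    # stand-in for 50 repeated-scan noise ROIs
    nps, f = nps_2d(rois)
    # Integrating the NPS over frequency should recover the pixel variance (here 12**2 = 144)
    print("integrated NPS:", nps.sum() * (f[1] - f[0]) ** 2)
```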

To move beyond assessing only noise properties in textured phantoms and toward assessing detectability, a series of new phantoms was designed specifically to measure low-contrast detectability in the presence of background texture. The textures used were optimized with a genetic algorithm to match the texture in the liver regions of actual patient CT images. The so-called “Clustered Lumpy Background” texture synthesis framework was used to generate the modeled texture. Three textured phantoms and a corresponding uniform phantom were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Images were reconstructed with FBP and SAFIRE and analyzed using a multi-slice channelized Hotelling observer to measure detectability and the dose reduction potential of SAFIRE based on the uniform and textured phantoms. It was found that, at the same dose, the improvement in detectability from SAFIRE (compared to FBP) was higher when measured in a uniform phantom than in the textured phantoms.
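Clustered lumpy background textures are generated by scattering clusters of overlapping blobs across the field of view. The sketch below is a simplified variant that uses isotropic Gaussian blobs and arbitrary parameters; the published CLB model uses a more general blob profile with random orientation and scale, and the dissertation additionally tuned the parameters with a genetic algorithm.

```python
# Simplified clustered-lumpy-background texture sketch (Gaussian blobs; parameters are illustrative).
import numpy as np

def clustered_lumpy_background(size=256, mean_clusters=40, mean_blobs=15,
                               cluster_sigma=12.0, blob_sigma=3.0, amplitude=1.0, seed=0):
    rng = np.random.default_rng(seed)
    img = np.zeros((size, size))
    y, x = np.mgrid[:size, :size]
    for _ in range(rng.poisson(mean_clusters)):          # Poisson number of clusters
        cx, cy = rng.uniform(0, size, 2)                 # cluster centre, uniform over the image
        for _ in range(rng.poisson(mean_blobs)):         # Poisson number of blobs per cluster
            bx = cx + rng.normal(0.0, cluster_sigma)     # blob centre scattered around the cluster
            by = cy + rng.normal(0.0, cluster_sigma)
            img += amplitude * np.exp(-((x - bx) ** 2 + (y - by) ** 2) / (2 * blob_sigma ** 2))
    return img

if __name__ == "__main__":
    texture = clustered_lumpy_background()
    print("texture mean/std:", texture.mean(), texture.std())
```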

The final component of this project aimed to develop methods to mathematically model lesions, as a means to help assess image quality directly from patient images. The mathematical modeling framework is first presented. The models describe a lesion’s morphology in terms of size, shape, contrast, and edge profile as an analytical equation. The models can be voxelized and inserted into patient images to create so-called “hybrid” images. These hybrid images can then be used to assess detectability or estimability, with the advantage that the ground truth of the lesion morphology and location is known exactly. Based on this framework, a series of liver lesions, lung nodules, and kidney stones were modeled based on images of real lesions. The lesion models were virtually inserted into patient images to create a database of hybrid images to go along with the original database of real lesion images. ROI images from each database were assessed by radiologists in a blinded fashion to determine the realism of the hybrid images. It was found that the radiologists could not readily distinguish between real and virtual lesion images (area under the ROC curve was 0.55). This study provided evidence that the proposed mathematical lesion modeling framework could produce reasonably realistic lesion images.
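A minimal sketch of an analytical lesion model of this kind is shown below: a spherical lesion parameterized by radius, contrast and a sigmoidal edge profile, voxelized and added to an image volume to form a hybrid ROI. The particular functional form and all parameters are assumptions for illustration; the dissertation's models may differ in detail.

```python
# Illustrative analytical lesion model: size, contrast and edge profile as a radial function.
import numpy as np

def lesion_volume(shape, center, radius_mm, contrast_hu, edge_mm, voxel_mm=0.7):
    """Voxelized spherical lesion with a sigmoidal edge profile (assumed parameterization)."""
    z, y, x = np.indices(shape).astype(float)
    r = voxel_mm * np.sqrt((z - center[0]) ** 2 + (y - center[1]) ** 2 + (x - center[2]) ** 2)
    # contrast falls from contrast_hu inside the lesion to 0 outside, over a width set by edge_mm
    return contrast_hu / (1.0 + np.exp((r - radius_mm) / edge_mm))

if __name__ == "__main__":
    patient_roi = np.full((32, 64, 64), 60.0)                     # stand-in for a liver ROI in HU
    hybrid = patient_roi + lesion_volume(patient_roi.shape, (16, 32, 32),
                                         radius_mm=6.0, contrast_hu=-15.0, edge_mm=1.0)
    print("min HU in hybrid ROI:", hybrid.min())                  # ~45 HU at the lesion centre
```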

Based on that result, two studies were conducted to demonstrate the utility of the lesion models. The first study used the modeling framework as a measurement tool to determine how dose and reconstruction algorithm affected the quantitative analysis of liver lesions, lung nodules, and renal stones in terms of their size, shape, attenuation, edge profile, and texture features. The same database of real lesion images used in the previous study was used for this study. That database contained images of the same patient at 2 dose levels (50% and 100%) along with 3 reconstruction algorithms from a GE 750HD CT system (GE Healthcare): FBP, Adaptive Statistical Iterative Reconstruction (ASiR), and Model-Based Iterative Reconstruction (MBIR). A total of 23 quantitative features were extracted from the lesions under each condition. It was found that both dose and reconstruction algorithm had a statistically significant effect on the feature measurements. In particular, radiation dose affected 5, 3, and 4 of the 23 features (related to lesion size, conspicuity, and pixel-value distribution) for liver lesions, lung nodules, and renal stones, respectively. MBIR significantly affected 9, 11, and 15 of the 23 features (including size, attenuation, and texture features) for liver lesions, lung nodules, and renal stones, respectively. Lesion texture was not significantly affected by radiation dose.

The second study demonstrating the utility of the lesion modeling framework focused on assessing the detectability of very low-contrast liver lesions in abdominal imaging. Specifically, detectability was assessed as a function of dose and reconstruction algorithm. As part of a parallel clinical trial, images from 21 patients were collected at 6 dose levels per patient on a SOMATOM Flash scanner. Subtle liver lesion models (contrast = -15 HU) were inserted into the raw projection data from the patient scans. The projections were then reconstructed with FBP and SAFIRE (strength 5); lesion-free images were also reconstructed. Noise, contrast, CNR, and the detectability index of an observer model (non-prewhitening matched filter) were assessed. It was found that, compared to FBP, SAFIRE reduced noise by 52%, reduced contrast by 12%, increased CNR by 87%, and increased the detectability index by 65%. Further, a 2AFC human perception experiment was performed to assess the dose reduction potential of SAFIRE, which was found to be 22% compared to the standard-of-care dose.

In conclusion, this dissertation provides to the scientific community a series of new methodologies, phantoms, analysis techniques, and modeling tools that can be used to rigorously assess image quality from modern CT systems. Specifically, methods to properly evaluate iterative reconstruction have been developed and are expected to aid in the safe clinical implementation of dose reduction technologies.

Relevance: 100.00%

Abstract:

© 2014, Canadian Anesthesiologists' Society. Optimal perioperative fluid management is an important component of Enhanced Recovery After Surgery (ERAS) pathways. Fluid management within ERAS should be viewed as a continuum through the preoperative, intraoperative, and postoperative phases. Each phase is important for improving patient outcomes, and suboptimal care in one phase can undermine best practice within the rest of the ERAS pathway. The goal of preoperative fluid management is for the patient to arrive in the operating room in a hydrated and euvolemic state. To achieve this, prolonged fasting is not recommended, and routine mechanical bowel preparation should be avoided. Patients should be encouraged to ingest a clear carbohydrate drink two to three hours before surgery. The goals of intraoperative fluid management are to maintain central euvolemia and to avoid excess salt and water. To achieve this, patients undergoing surgery within an enhanced recovery protocol should have an individualized fluid management plan. As part of this plan, excess crystalloid should be avoided in all patients. For low-risk patients undergoing low-risk surgery, a “zero-balance” approach might be sufficient. In addition, for most patients undergoing major surgery, individualized goal-directed fluid therapy (GDFT) is recommended. Ultimately, however, the additional benefit of GDFT should be determined based on surgical and patient risk factors. Postoperatively, once fluid intake is established, intravenous fluid administration can be discontinued and restarted only if clinically indicated. In the absence of other concerns, detrimental postoperative fluid overload is not justified and “permissive oliguria” could be tolerated.

Relevance: 100.00%

Abstract:

The selected publications are focused on the relations between users, eGames and the educational context, and how they interact together so that both learning and user performance are improved through feedback provision. A key part of this analysis is the identification of behavioural, anthropological patterns, so that users can be clustered based on their actions and the steps taken in the system (e.g. social network, online community, or virtual campus). In doing so, we can analyse large data sets produced by a broad user sample, which will provide more accurate statistical reports and readings. Furthermore, this research is focused on how users can be clustered based on individual and group behaviour, so that personalized support through feedback is provided and the personal learning process is improved along with the group interaction. We take inputs from every person and from the group they belong to, cluster the contributions, find behavioural patterns and provide personalized feedback to the individual and the group, based on personal and group findings. And we do all this in the context of educational games integrated into learning communities and learning management systems. To carry out this research we design a set of research questions across the 10 years of published work presented in this thesis. We ask whether users can be clustered together based on the inputs provided by them and their groups; whether and how these data are useful for improving learner performance and group interaction; whether and how feedback becomes a useful tool for this pedagogical goal; whether and how eGames become a powerful context in which to deploy the pedagogical methodology and the various research methods and activities that make use of that feedback to encourage learning and interaction; and whether and how a game design and a learning design must be defined and implemented to achieve these objectives and to facilitate the productive authoring and integration of eGames in pedagogical contexts and frameworks. We conclude that educational games are a resourceful tool to provide a user experience that leads to better personalized learning performance and enhanced group interaction along the way. To do so, eGames, while integrated in an educational context, must follow a specific set of user and technical requirements, so that the playful context supports the underlying pedagogical model. We also conclude that, while playing, users can be clustered based on their personal behaviour and interaction with others, thanks to pattern identification. Based on this information, a set of recommendations is provided to the user and the group in the form of personalized feedback, timely managed for an optimum impact on learning performance and group interaction. In this research, Digital Anthropology is introduced as a concept at a late stage to provide a backbone across various academic fields, including Social Science, Cognitive Science, Behavioural Science, educational games and, of course, Technology-enhanced Learning. Although only recently described as an evolution of traditional anthropology, this approach to digital behaviour and social structure facilitates understanding amongst these fields and a comprehensive view towards a combined approach.
This research takes forward the existing published work on users and eGames for learning, and turns the focus to the next step: clustering users based on their behaviour and offering proper, personalized feedback to the user and the group based on that clustering, rather than on isolated inputs from every user. This pattern recognition in the described context of eGames in educational settings, towards the presented aim of personalized counselling to the user and the group through feedback, is something that has not been accomplished before.
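As a minimal illustration of the behaviour-based clustering step, the sketch below groups users by simple interaction features and attaches a per-cluster feedback message. The feature names, cluster count and messages are hypothetical placeholders, not the thesis's actual pipeline.

```python
# Minimal behaviour-clustering sketch with per-cluster feedback (features and messages are hypothetical).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Assumed per-user features: [sessions per week, avg. session minutes, forum posts, game levels cleared]
rng = np.random.default_rng(3)
features = np.vstack([
    rng.normal([1, 10, 0, 2], [0.5, 3, 0.5, 1], (40, 4)),    # low-engagement users
    rng.normal([4, 30, 3, 8], [1.0, 5, 1.0, 2], (40, 4)),    # moderately engaged users
    rng.normal([7, 55, 9, 15], [1.5, 8, 2.0, 3], (40, 4)),   # highly engaged users
])

X = StandardScaler().fit_transform(features)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Hypothetical personalized feedback per behavioural cluster, ordered by mean engagement
order = np.argsort([features[labels == k, 0].mean() for k in range(3)])
messages = ["Try shorter, more frequent play sessions and post one question in the forum.",
            "Good progress: join a group challenge to boost collaboration.",
            "Great engagement: consider mentoring peers in your learning community."]
for rank, cluster in enumerate(order):
    n_users = int((labels == cluster).sum())
    print(f"cluster {cluster} ({n_users} users): {messages[rank]}")
```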

Relevance: 100.00%

Abstract:

Revenue from the digital distribution of the press remains low, despite its volume of readers, the growing range of advertising formats, the capacity of digital media to measure return on investment, and the apparent unification of audience measurement present in the market since 2012. The various media outlets thus still fail to make the digital migration profitable, as it entails continuous daily costs without delivering benefits. The audience as the currency of exchange in the sale of advertising space has lost all its value, despite the return on investment it offers. This paper describes this problem and the changes made before and after the "end of the audience measurement debate", with the aim of restoring value to the sale of digital advertising space. The case of digital newspapers is analyzed in particular, given their high readership and meagre revenues.

Relevance: 100.00%

Abstract:

Cardiovascular diseases are the leading cause of death worldwide, and abdominal aortic aneurysms (AAAs) are part of this deplorable toll. An aneurysm is the dilation of an artery that can lead to death; a ruptured AAA is fatal nearly 80% of the time. One way to treat AAAs is the insertion of a stent-graft (SG) into the aorta, commonly called endovascular repair (EVAR), in order to reduce the pressure exerted by blood flow on the wall. The effectiveness of this treatment is compromised by the occurrence of endoleaks (blood flow between the graft and the aneurysm sac), which can lead to rupture of the aneurysm. These blood flows can occur at any time after EVAR. Annual surveillance by computed tomography (CT scan) is therefore required, increasing the cost of post-EVAR follow-up and exposing the patient to ionizing radiation and to the complications of iodinated contrast agents. Endotension refers to dilation of the aneurysm without an endoleak apparent on the CT scan. After EVAR, the blood in the aneurysm sac coagulates to form fresh thrombus, which progressively becomes more fibrous and organized, leading to shrinkage of the aneurysm. There are very few data in the literature on this temporal process and on the relationship between fresh thrombus and endotension. The gold standard of post-EVAR follow-up, the CT scan, cannot detect the presence of fresh thrombus. There is therefore a need to invest in a safe and less costly technique for the follow-up of AAAs after EVAR. A recent method, dynamic elastography, measures tissue elasticity in real time. The principle of this technique relies on the generation of shear waves and the study of their propagation in order to infer the mechanical properties of the medium under study. This thesis aims to apply dynamic elastography to the detection of endoleaks and to the mechanical characterization of the tissues of the aneurysm sac after EVAR. The project reveals the potential of elastography to reduce the radiation risks, contrast-agent use and costs of post-EVAR follow-up of AAAs. Dynamic elastography using Shear Wave Imaging (SWI) is promising: this modality could complement the Doppler ultrasound (DUS) already used for post-EVAR follow-up examinations, and SWI has the potential to provide information on the fibrous organization of the thrombus as well as on the detection of endoleaks.

The first objective of this thesis was to test SWI on AAAs in canine models for endoleak detection and thrombus characterization. SGs were implanted in a group of 18 dogs with an aneurysm created using the jugular vein. Four aneurysms had a type I endoleak, 13 had a type II endoleak, and one aneurysm had no endoleak. Ultrasound, DUS and SWI examinations were performed at implantation and then 1 week, 1 month, 3 months and 6 months after EVAR. Angiography, CT scans and macroscopic sections were produced at sacrifice. The regions of endoleak, fresh thrombus and organized thrombus were identified and segmented. The stiffness values given by SWI for the different regions were compared and were significantly different (P < 0.001). SWI was also able to detect endoleaks where the CT scan (1) and DUS (3) failed.

Continuing this work, the second objective of the project was to characterize the evolution of the thrombus over time, as well as the evolution of endoleaks after embolization, in canine models. Eighteen aneurysms were created in the iliac arteries of nine canine models, followed by a type I endoleak after EVAR. Two embolizing gels (Chitosan (Chi) or Chitosan-Sodium-Tetradecyl-Sulfate (Chi-STS)) were injected into the aneurysm sac to promote healing. Ultrasound, DUS and SWI examinations were performed at implantation and after 1 week, 1 month, 3 months and 6 months. Angiography, a CT scan and a histological examination were performed at sacrifice to assess the presence, type and size of the endoleak. The elastic modulus values of the regions of interest were identified and segmented on the pathology data. The endoleak and fresh thrombus regions were significantly different from the other regions (P < 0.001). The elasticity values of fresh thrombus at 1 week and at 3 months indicate that SWI can assess thrombus maturation, as well as characterize the evolution and degradation of the embolizing gels over time. SWI was able to detect endoleaks where DUS failed (2) and, unlike the CT scan, to detect the presence of fresh thrombus.

Finally, the last step of the doctoral project was to apply SWI in a clinical phase, with human patients already presenting an AAA, for endoleak detection and characterization of tissue elasticity. Twenty-five patients were selected to participate in the study. An imaging comparison was made between SWI, the CT scan and DUS. The stiffness values given by SWI for the different regions (endoleak, thrombus) were identified and segmented, and were significantly different (P < 0.001). SWI detected 5 of 6 endoleaks (sensitivity of 83.3%) and had 6 false positives (specificity of 76%). SWI was able to detect endoleaks where the CT scan (2) and DUS (2) failed. There was no notable statistical difference between thrombus stiffness for an AAA with an endoleak and an AAA without an endoleak. No significant correlation could be established between AAA diameters, or their variations, and thrombus elasticity. SWI has the potential to detect endoleaks and to characterize thrombus according to its mechanical properties. This technique could be combined with post-EVAR follow-up of AAAs, complementing DUS imaging and reducing the cost and the exposure to ionizing radiation and nephrotoxic contrast agents.
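For orientation, shear wave elastography infers stiffness from the propagation speed of the induced shear wave: for a quasi-incompressible soft tissue, the shear modulus is mu = rho * c^2 and the Young's modulus is approximately E = 3 * rho * c^2. The sketch below estimates the speed from simulated wave arrival times along a lateral line and converts it to kilopascals; the numbers are synthetic and the scanner's actual SWI processing is considerably more elaborate.

```python
# Shear-wave speed and elasticity estimation sketch (synthetic arrival times; not the scanner's pipeline).
import numpy as np

RHO = 1000.0  # soft-tissue density in kg/m^3 (standard assumption)

def youngs_modulus_kpa(lateral_mm, arrival_ms):
    """Fit shear-wave speed from arrival time vs. lateral position, then E ~= 3*rho*c^2."""
    slope_ms_per_mm, _ = np.polyfit(lateral_mm, arrival_ms, 1)   # time-of-flight regression
    c = 1.0 / slope_ms_per_mm                                    # mm/ms equals m/s
    mu = RHO * c ** 2                                            # shear modulus, Pa
    return 3.0 * mu / 1000.0                                     # Young's modulus, kPa

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    x = np.linspace(0.0, 10.0, 21)                 # lateral positions, mm
    c_true = 2.5                                   # m/s, a plausible value for organized thrombus
    t = x / c_true + rng.normal(0.0, 0.05, x.size) # arrival times in ms, with tracking jitter
    print(f"estimated E = {youngs_modulus_kpa(x, t):.1f} kPa (true ~ {3*RHO*c_true**2/1000:.1f} kPa)")
```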

Relevance: 100.00%

Abstract:

It is estimated that 1 in 5 people will, at some point in their lives, experience a long-term illness or disability that will impact their day-to-day lives. Access to digital information and technologies can be life-changing and a necessity for full participation in education, work and society. Specialist assistive technologies, such as screen readers, have been available for many years and are now built into operating systems and devices. In addition, web accessibility standards have been compiled and published since the advent of the World Wide Web over two decades ago. However, internet use by people with disabilities continues to lag significantly behind that of those with no disability, and use of assistive technologies remains lower than it should be, with tools often abandoned. In this seminar we will talk about our work to identify digital accessibility challenges: the barriers experienced by those with disabilities and how computer scientists can play a part in removing obstacles to access and ease of use. We will discuss some of our projects focussing on:

• development of assistive technologies for niche groups of users;
• improving accessibility standards to cover a wider range of disabilities;
• creating accessibility training resources for developers and stakeholders;
• embedding accessibility practice within development projects.

Relevance: 100.00%

Abstract:

This article discusses the contribution of critical political economy approaches to digital journalism studies and argues that these offer important correctives to celebratory perspectives. The first part offers a review and critique of influential claims arising from self-styled new studies of convergence culture, media and creative industries. The second part discusses the contribution of critical political economy in examining digital journalism and responding to celebrant claims. The final part reflects on problems of restrictive normativity and other limitations within media political economy perspectives and considers ways in which challenges might be addressed by more synthesising approaches. The paper proposes developing radical pluralist, media systems and comparative analysis, and advocates drawing on strengths in both political economy and culturalist traditions to map and evaluate practices across all sectors of digital journalism.

Relevance: 100.00%

Abstract:

Background: Digital forensics is a rapidly expanding field, due to the continuing advances in computer technology and increases in the data storage capabilities of devices. However, the tools supporting digital forensics investigations have not kept pace with this evolution, often leaving the investigator to analyse large volumes of textual data and rely heavily on their own intuition and experience.

Aim: This research proposes that, given the ability of information visualisation to provide an end user with an intuitive way to rapidly analyse large volumes of complex data, such approaches could be applied to digital forensic datasets. These methods are investigated, supported by a review of the literature on the use of such techniques in other fields. The hypothesis of this body of research is that, by utilising exploratory information visualisation techniques in the form of a tool to support digital forensic investigations, gains in investigative effectiveness can be realised.

Method: To test the hypothesis, this research examines three different case studies which look at different forms of information visualisation and their implementation with a digital forensic dataset. Two of these case studies take the form of prototype tools developed by the researcher, and one case study utilises a tool created by a third-party research group. A pilot study by the researcher was conducted on these cases, with the strengths and weaknesses of each carried into the next case study. The culmination of these case studies is a prototype tool built around a timeline visualisation of user behaviour on a device. This tool was subjected to an experiment involving a class of university digital forensics students who were given a number of questions about a synthetic digital forensic dataset. Approximately half were given the prototype tool, named Insight, to use, and the others were given a common open-source tool. The assessed metrics included how long the participants took to complete all tasks, how accurate their answers to the tasks were, and how easy the participants found the tasks to complete. They were also asked for their feedback at multiple points throughout the task.

Results: The results showed a statistically significant increase in accuracy for one of the six tasks for the participants using the Insight prototype tool. Participants also found completing two of the six tasks significantly easier when using the prototype tool. There was no statistically significant difference between the completion times of the two participant groups, and no statistically significant differences in the accuracy of participant answers for five of the six tasks.

Conclusions: The results from this body of research suggest that there is potential for gains in investigative effectiveness when information visualisation techniques are applied to a digital forensic dataset. Specifically, in some scenarios, the investigator can draw conclusions which are more accurate than those drawn when using primarily textual tools. There is also evidence to suggest that the investigators reached these conclusions significantly more easily when using a tool with a visual format. None of the scenarios led to the investigators being at a significant disadvantage in terms of accuracy or usability when using the prototype visual tool over the textual tool. It is noted that this research did not show that the use of information visualisation techniques leads to any statistically significant difference in the time taken to complete a digital forensics investigation.
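To give a flavour of the timeline idea behind a tool like the Insight prototype, the sketch below plots timestamped forensic artefacts as an event timeline grouped by artefact type using matplotlib. The event list and categories are invented for illustration and bear no relation to the actual tool or the datasets used in the study.

```python
# Toy forensic event-timeline sketch (synthetic events; not the Insight prototype).
from datetime import datetime
import matplotlib.pyplot as plt
import matplotlib.dates as mdates

# Hypothetical (timestamp, artefact type) pairs recovered from a device image
events = [
    ("2020-03-01 08:55", "Web history"),
    ("2020-03-01 09:02", "USB connected"),
    ("2020-03-01 09:05", "File created"),
    ("2020-03-01 09:06", "File created"),
    ("2020-03-01 09:15", "USB removed"),
    ("2020-03-01 17:40", "Web history"),
]
categories = sorted({kind for _, kind in events})
times = [datetime.strptime(ts, "%Y-%m-%d %H:%M") for ts, _ in events]
rows = [categories.index(kind) for _, kind in events]

fig, ax = plt.subplots(figsize=(8, 3))
ax.scatter(times, rows, marker="|", s=400)          # one tick per artefact on its category row
ax.set_yticks(range(len(categories)))
ax.set_yticklabels(categories)
ax.xaxis.set_major_formatter(mdates.DateFormatter("%H:%M"))
ax.set_title("User activity timeline (synthetic example)")
plt.tight_layout()
plt.show()
```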

Relevance: 100.00%

Abstract:

Edge-labeled graphs have proliferated rapidly over the last decade due to the increased popularity of social networks and the Semantic Web. In social networks, relationships between people are represented by edges, and each edge is labeled with a semantic annotation; hence, a huge single graph can express many different relationships between entities. The Semantic Web represents each single fragment of knowledge as a triple (subject, predicate, object), which is conceptually identical to an edge from subject to object labeled with the predicate. A set of triples constitutes an edge-labeled graph on which knowledge inference is performed. Subgraph matching has been extensively used as a query language for patterns in the context of edge-labeled graphs. For example, in social networks, users can specify a subgraph matching query to find all people that have certain neighborhood relationships. Heavily used fragments of the SPARQL query language for the Semantic Web, and the graph queries of other graph DBMSs, can also be viewed as subgraph matching over large graphs. Although subgraph matching has been extensively studied as a query paradigm in the Semantic Web and in social networks, a user can get a large number of answers in response to a query. These answers can be shown to the user in accordance with an importance ranking. In this thesis proposal, we present four different scoring models along with scalable algorithms to find the top-k answers via a suite of intelligent pruning techniques. The suggested models consist of a practically important subset of the SPARQL query language augmented with some additional useful features. The first model, called Substitution Importance Query (SIQ), identifies the top-k answers whose scores are calculated from matched vertices' properties in each answer in accordance with a user-specified notion of importance. The second model, called Vertex Importance Query (VIQ), identifies important vertices in accordance with a user-defined scoring method that builds on top of various subgraphs articulated by the user. Approximate Importance Query (AIQ), our third model, allows partial and inexact matchings and returns the top-k of them under user-specified approximation terms and scoring functions. In the fourth model, called Probabilistic Importance Query (PIQ), a query consists of several sub-blocks: one mandatory block that must be mapped and other blocks that can be opportunistically mapped. The probability is calculated from various aspects of the answers, such as the number of mapped blocks and the vertices' properties in each block, and the top-k most probable answers are returned. An important distinguishing feature of our work is that we allow the user a huge amount of freedom in specifying: (i) what pattern and approximation they consider important, (ii) how to score answers, irrespective of whether they are vertices or substitutions, and (iii) how to combine and aggregate scores generated by multiple patterns and/or multiple substitutions. Because so much power is given to the user, indexing is more challenging than in situations where additional restrictions are imposed on the queries the user can ask. The proposed algorithms for the first model can also be used to answer SPARQL queries with ORDER BY and LIMIT, and the method for the second model also works for SPARQL queries with GROUP BY, ORDER BY and LIMIT. We test our algorithms on multiple real-world graph databases, showing that they are far more efficient than popular triple stores.
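To make the query paradigm concrete, here is a minimal backtracking matcher for edge-labeled graphs that enumerates all subgraph matches and ranks them with a user-supplied scoring function, loosely in the spirit of the Substitution Importance Query model. The pruning is deliberately naive and the graph, labels and importance scores are invented for illustration; the proposal's algorithms and index structures are far more sophisticated.

```python
# Minimal edge-labeled subgraph matching with top-k scoring (illustrative; not the proposal's algorithms).
import heapq

def match(query_edges, data_edges, score):
    """Enumerate injective mappings of query vertices to data vertices that preserve labeled edges.

    query_edges / data_edges: iterables of (src, label, dst) triples.
    score: function mapping a complete {query_vertex: data_vertex} dict to a number.
    """
    data = set(data_edges)
    data_vertices = {v for s, _, d in data for v in (s, d)}
    query_vertices = sorted({v for s, _, d in query_edges for v in (s, d)})

    def consistent(mapping):
        # every query edge whose endpoints are both mapped must exist in the data graph
        return all((mapping[s], l, mapping[d]) in data
                   for s, l, d in query_edges
                   if s in mapping and d in mapping)

    def extend(mapping, remaining):
        if not remaining:
            yield dict(mapping)
            return
        v, rest = remaining[0], remaining[1:]
        for candidate in data_vertices - set(mapping.values()):
            mapping[v] = candidate
            if consistent(mapping):            # prune as soon as an edge constraint fails
                yield from extend(mapping, rest)
            del mapping[v]

    return [(score(m), m) for m in extend({}, query_vertices)]

def top_k(answers, k):
    return heapq.nlargest(k, answers, key=lambda pair: pair[0])

if __name__ == "__main__":
    # Toy social graph: who follows / works_with whom, with a hypothetical importance score per person
    data = [("ann", "follows", "bob"), ("bob", "works_with", "cat"),
            ("ann", "follows", "dan"), ("dan", "works_with", "cat"),
            ("eve", "follows", "bob")]
    importance = {"ann": 3, "bob": 5, "cat": 2, "dan": 1, "eve": 4}
    query = [("?x", "follows", "?y"), ("?y", "works_with", "?z")]
    answers = match(query, data, score=lambda m: sum(importance[v] for v in m.values()))
    for s, m in top_k(answers, k=2):
        print(s, m)
```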

Relevance: 100.00%

Abstract:

This dissertation presents the design of three high-performance successive-approximation-register (SAR) analog-to-digital converters (ADCs) using distinct digital background calibration techniques under the framework of a generalized code-domain linear equalizer. These digital calibration techniques effectively and efficiently remove the static mismatch errors in the analog-to-digital (A/D) conversion. They enable aggressive scaling of the capacitive digital-to-analog converter (DAC), which also serves as the sampling capacitor, to the kT/C limit. As a result, outstanding conversion linearity, high signal-to-noise ratio (SNR), high conversion speed, robustness, superb energy efficiency, and minimal chip area are accomplished simultaneously. The first design is a 12-bit 22.5/45-MS/s SAR ADC in a 0.13-μm CMOS process. It employs a perturbation-based calibration, based on the superposition property of linear systems, to digitally correct the capacitor mismatch error in the weighted DAC. With 3.0-mW power dissipation at a 1.2-V power supply and a 22.5-MS/s sample rate, it achieves a 71.1-dB signal-to-noise-plus-distortion ratio (SNDR) and a 94.6-dB spurious-free dynamic range (SFDR). At the Nyquist frequency, the conversion figure of merit (FoM) is 50.8 fJ/conversion-step, the best FoM reported to date (2010) for 12-bit ADCs. The SAR ADC core occupies 0.06 mm2, while the estimated area of the calibration circuits is 0.03 mm2. The second proposed digital calibration technique is a bit-wise-correlation-based digital calibration. It utilizes the statistical independence of an injected pseudo-random signal and the input signal to correct the DAC mismatch in SAR ADCs. This idea is experimentally verified in a 12-bit 37-MS/s SAR ADC fabricated in 65-nm CMOS implemented by Pingli Huang. This prototype chip achieves a 70.23-dB peak SNDR and an 81.02-dB peak SFDR, while occupying 0.12-mm2 silicon area and dissipating 9.14 mW from a 1.2-V supply, with the synthesized digital calibration circuits included. The third work is an 8-bit, 600-MS/s, 10-way time-interleaved SAR ADC array fabricated in a 0.13-μm CMOS process. This work employs an adaptive digital equalization approach to calibrate both intra-channel nonlinearities and inter-channel mismatch errors. The prototype chip achieves 47.4-dB SNDR, 63.6-dB SFDR, less than 0.30-LSB differential nonlinearity (DNL), and less than 0.23-LSB integral nonlinearity (INL). The ADC array occupies an active area of 1.35 mm2 and dissipates 30.3 mW, including the synthesized digital calibration circuits and an on-chip dual-loop delay-locked loop (DLL) for clock generation and synchronization.
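To illustrate the code-domain view of capacitor-mismatch calibration, the sketch below simulates a SAR conversion whose internal DAC bit weights deviate from the nominal binary values, then estimates corrected digital weights by a least-squares fit against a known training input. This is a deliberately simplified foreground scheme, not the perturbation-based or bit-wise-correlation background calibrations of the dissertation, but it shows the shared principle: linearity is restored by reconstructing the output with estimated, rather than nominal, bit weights.

```python
# Simplified SAR ADC bit-weight calibration sketch (foreground least-squares; illustrative only).
import numpy as np

rng = np.random.default_rng(5)
NBITS = 12
nominal = 2.0 ** np.arange(NBITS - 1, -1, -1)             # ideal binary weights: 2048, 1024, ..., 1
actual = nominal * (1 + rng.normal(0, 0.003, NBITS))      # capacitor mismatch of ~0.3% sigma

def sar_convert(x, weights):
    """Successive-approximation bit decisions using the ADC's *actual* DAC weights."""
    bits, residue = np.zeros(NBITS), x
    for i, w in enumerate(weights):
        if residue >= w:
            bits[i], residue = 1, residue - w
    return bits

full_scale = nominal.sum()
train = rng.uniform(0, full_scale, 4000)                  # known training inputs (foreground calibration)
B = np.array([sar_convert(x, actual) for x in train])     # raw bit decisions
w_est, *_ = np.linalg.lstsq(B, train, rcond=None)         # code-domain weight estimation

test = rng.uniform(0, full_scale, 4000)
Bt = np.array([sar_convert(x, actual) for x in test])
err_nominal = Bt @ nominal - test                          # reconstruct with ideal binary weights
err_calibrated = Bt @ w_est - test                         # reconstruct with estimated weights
print("rms error, nominal weights   :", np.sqrt(np.mean(err_nominal ** 2)))
print("rms error, calibrated weights:", np.sqrt(np.mean(err_calibrated ** 2)))
```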