982 results for Computer techniques


Relevance: 30.00%

Abstract:

This thesis introduces new processing techniques for computer-aided interpretation of ultrasound images with the purpose of supporting medical diagnosis. In terms of practical application, the goal of this work is the improvement of current prostate biopsy protocols by providing physicians with a visual map, overlaid on ultrasound images, marking regions potentially affected by disease. As far as analysis techniques are concerned, the main contribution of this work to the state of the art is the introduction of deconvolution as a pre-processing step in the standard ultrasonic tissue characterization procedure, improving the diagnostic significance of ultrasonic features. This thesis also includes some innovations in ultrasound modeling, in particular the employment of a continuous-time autoregressive moving-average (CARMA) model for ultrasound signals, a new maximum-likelihood CARMA estimator based on exponential splines, and the definition of CARMA parameters as new ultrasonic features able to capture scatterer concentration. Finally, concerning the clinical usefulness of the developed techniques, the main contribution of this research is showing, through a study based on medical ground truth, that a reduction in the number of sampled cores in standard prostate biopsy is possible while preserving the same diagnostic power as the current clinical protocol.
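
As a rough illustration of using estimated signal-model parameters as ultrasonic features, the Python sketch below fits a discrete-time autoregressive model to an RF segment via the Yule-Walker equations. It is only a stand-in for the continuous-time CARMA estimator based on exponential splines developed in the thesis, and the synthetic segment is an assumption.

```python
import numpy as np

def yule_walker_ar(signal, order=4):
    """Estimate AR coefficients of a demeaned RF segment via the
    Yule-Walker equations; a discrete-time stand-in for the CARMA
    parameters used as ultrasonic features in the thesis."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()
    n = len(x)
    # Biased autocorrelation estimates r[0..order]
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:])          # AR coefficients
    noise_var = r[0] - np.dot(a, r[1:])    # driving-noise variance
    return a, noise_var

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic backscattered segment: damped oscillation plus speckle-like noise
    t = np.arange(2048)
    seg = np.exp(-t / 800.0) * np.sin(0.3 * t) + 0.2 * rng.standard_normal(t.size)
    coeffs, var = yule_walker_ar(seg, order=4)
    features = np.concatenate([coeffs, [var]])   # per-ROI feature vector
    print(features)
```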

Relevance: 30.00%

Abstract:

The increasing precision of current and future experiments in high-energy physics requires a corresponding increase in the accuracy of the calculation of theoretical predictions, in order to find evidence for possible deviations from the generally accepted Standard Model of elementary particles and interactions. Calculating the experimentally measurable cross sections of scattering and decay processes to a higher accuracy directly translates into including higher-order radiative corrections in the calculation. The large number of particles and interactions in the full Standard Model results in an exponentially growing number of Feynman diagrams contributing to any given process in higher orders. Additionally, the appearance of multiple independent mass scales makes even the calculation of single diagrams non-trivial. For over two decades now, the only way to cope with these issues has been to rely on the assistance of computers. The aim of the xloops project is to provide the necessary tools to automate the calculation procedures as far as possible, including the generation of the contributing diagrams and the evaluation of the resulting Feynman integrals. The latter is based on the techniques developed in Mainz for solving one- and two-loop diagrams in a general and systematic way using parallel/orthogonal space methods. These techniques involve a considerable amount of symbolic computation. During the development of xloops it was found that conventional computer algebra systems were not a suitable implementation environment. For this reason, a new system called GiNaC has been created, which allows the development of large-scale symbolic applications in an object-oriented fashion within the C++ programming language. This system, which is now also in use for other projects besides xloops, is the main focus of this thesis. The implementation of GiNaC as a C++ library sets it apart from other algebraic systems. Our results prove that a highly efficient symbolic manipulator can be designed in an object-oriented way, and that having a very fine granularity of objects is also feasible. The xloops-related parts of this work consist of a new implementation, based on GiNaC, of functions for calculating one-loop Feynman integrals that already existed in the original xloops program, as well as the addition of supplementary modules belonging to the interface between the library of integral functions and the diagram generator.
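
To make the idea of a fine-grained, object-oriented expression representation concrete, here is a toy symbolic expression tree with automatic differentiation. It is written in Python (the language used for all sketches in this listing), so it is a conceptual illustration only and not the GiNaC C++ API; all class names are hypothetical.

```python
from __future__ import annotations
from dataclasses import dataclass

# Every node is a small object, loosely mirroring the fine object
# granularity described for GiNaC; this is not the GiNaC API.

class Expr:
    def __add__(self, other): return Add(self, other)
    def __mul__(self, other): return Mul(self, other)

@dataclass(frozen=True)
class Num(Expr):
    value: float
    def diff(self, s): return Num(0.0)
    def __str__(self): return str(self.value)

@dataclass(frozen=True)
class Sym(Expr):
    name: str
    def diff(self, s): return Num(1.0 if s.name == self.name else 0.0)
    def __str__(self): return self.name

@dataclass(frozen=True)
class Add(Expr):
    a: Expr
    b: Expr
    def diff(self, s): return Add(self.a.diff(s), self.b.diff(s))
    def __str__(self): return f"({self.a} + {self.b})"

@dataclass(frozen=True)
class Mul(Expr):
    a: Expr
    b: Expr
    def diff(self, s):  # product rule
        return Add(Mul(self.a.diff(s), self.b), Mul(self.a, self.b.diff(s)))
    def __str__(self): return f"({self.a} * {self.b})"

x, y = Sym("x"), Sym("y")
f = x * x + x * y                  # built via the overloaded operators
print(f, "  d/dx:", f.diff(x))     # unsimplified derivative expression
```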

Relevance: 30.00%

Abstract:

Ultrasound imaging is widely used in medical diagnostics as it is the fastest, least invasive, and least expensive imaging modality. However, ultrasound images are intrinsically difficult to interpret. In this scenario, Computer Aided Detection (CAD) systems can be used to support physicians during diagnosis by providing them with a second opinion. This thesis discusses efficient ultrasound processing techniques for computer-aided medical diagnostics, focusing on two major topics: (i) Ultrasound Tissue Characterization (UTC), aimed at characterizing and differentiating between healthy and diseased tissue; (ii) Ultrasound Image Segmentation (UIS), aimed at detecting the boundaries of anatomical structures to automatically measure organ dimensions and compute clinically relevant functional indices. Research on UTC produced a CAD tool for prostate cancer detection to improve the biopsy protocol. In particular, this thesis contributes: (i) the development of a robust classification system; (ii) the exploitation of parallel computing on GPUs for real-time performance; (iii) the introduction of both an innovative semi-supervised learning algorithm and a novel supervised/semi-supervised learning scheme for CAD system training that improve system performance while reducing the data collection effort and avoiding wasting collected data. The tool provides physicians with a risk map highlighting suspect tissue areas, allowing them to perform a lesion-directed biopsy. Clinical validation demonstrated the system's validity as a diagnostic support tool and its effectiveness at reducing the number of biopsy cores required for an accurate diagnosis. For UIS, the research developed a heart disease diagnostic tool based on real-time 3D echocardiography. The contributions of this thesis to this application are: (i) the development of an automated, GPU-based level-set segmentation framework for 3D images; (ii) the application of this framework to myocardium segmentation. Experimental results showed the high efficiency and flexibility of the proposed framework. Its effectiveness as a tool for quantitative analysis of 3D cardiac morphology and function was demonstrated through clinical validation.
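
A hedged Python sketch of a generic semi-supervised self-training loop of the kind that can reduce labeling effort; it is not the semi-supervised algorithm or the GPU pipeline developed in the thesis, and the scikit-learn classifier and synthetic per-ROI features are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def self_training(X_lab, y_lab, X_unlab, threshold=0.9, rounds=5):
    """Generic self-training: iteratively add confidently pseudo-labelled
    unlabeled samples to the training set and refit the classifier."""
    X, y = np.asarray(X_lab), np.asarray(y_lab)
    pool = np.asarray(X_unlab)
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    for _ in range(rounds):
        if len(pool) == 0:
            break
        proba = clf.predict_proba(pool)
        keep = proba.max(axis=1) >= threshold       # confident predictions only
        if not keep.any():
            break
        pseudo = clf.classes_[proba[keep].argmax(axis=1)]
        X = np.vstack([X, pool[keep]])
        y = np.concatenate([y, pseudo])
        pool = pool[~keep]
        clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    return clf

# Usage sketch with synthetic feature vectors (e.g. per-ROI ultrasonic features):
rng = np.random.default_rng(0)
X_lab = rng.normal(size=(40, 5)); y_lab = (X_lab[:, 0] > 0).astype(int)
X_unlab = rng.normal(size=(400, 5))
model = self_training(X_lab, y_lab, X_unlab)
```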

Relevance: 30.00%

Abstract:

The study of artificial intelligence aims at solving a class of problems that require cognitive processes which are difficult to encode in an algorithm. Visual recognition of shapes and figures, the interpretation of sounds, and games of incomplete knowledge all rely on the human ability to interpret partial inputs as if they were complete, and to act accordingly. In the first chapter of this thesis a simple mathematical formalism is constructed to describe the act of making choices. The "learning" process is described in terms of maximizing a performance function over a parameter space for an ansatz of a function from a vector space to a finite, discrete set of choices, using a training set that provides examples of correct choices to reproduce. In light of this formalism, some of the most widespread artificial intelligence techniques are analyzed, and some issues arising from the use of these techniques are highlighted. In the second chapter the same formalism is applied to a less intuitive but more practical redefinition of the performance function which allows, for a linear ansatz, the explicit formulation of a set of equations in the components of the parameter-space vector that identifies the absolute maximum of the performance function. The solution of this set of equations is handled by means of the contraction mapping theorem. A natural polynomial generalization is also shown. In the third chapter some examples to which the results of the second chapter can be applied are studied in more detail. The concept of the intrinsic degree of a problem is introduced. Several performance optimizations are also discussed, such as the elimination of zeros, analytical precomputation, fingerprinting, and the reordering of components for the partial evaluation of high-dimensional scalar products. Finally, single-choice problems are introduced, i.e. the class of problems for which a training set is available for only one choice. In the fourth chapter an application example in the field of medical imaging diagnostics is discussed in more detail, in particular the problem of computer-aided detection of microcalcifications in mammograms.
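
As a minimal illustration of the contraction-theorem approach mentioned above, the following Python sketch iterates a generic contraction mapping to its unique fixed point; the map g and the matrix A are hypothetical stand-ins, not the thesis's parameter-update equations for the linear ansatz.

```python
import numpy as np

def fixed_point(g, x0, tol=1e-10, max_iter=1000):
    """Banach fixed-point iteration: if g is a contraction, the sequence
    x_{k+1} = g(x_k) converges to its unique fixed point."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_new = g(x)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("fixed-point iteration did not converge")

# Hypothetical example: g(w) = 0.5 * (A @ w) + b with ||0.5 * A|| < 1,
# standing in for the parameter-update equations of a linear ansatz.
A = np.array([[0.4, 0.1], [0.2, 0.3]])
b = np.array([1.0, -1.0])
w_star = fixed_point(lambda w: 0.5 * (A @ w) + b, np.zeros(2))
print(w_star)  # satisfies w = 0.5 * A @ w + b
```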

Relevance: 30.00%

Abstract:

The aim of tissue engineering is to develop biological substitutes that restore lost morphological and functional features of diseased or damaged portions of organs. Recently, computer-aided technology has received considerable attention in the area of tissue engineering, and the advance of additive manufacturing (AM) techniques has significantly improved control over the pore-network architecture of tissue engineering scaffolds. To regenerate tissues more efficiently, an ideal scaffold should have appropriate porosity and pore structure. More sophisticated porous configurations, with pore-network and scaffolding structures that mimic the intricate architecture and complexity of native organs and tissues, are therefore required. This study adopts a macro-structural shape design approach to the production of open porous materials (titanium foams), which uses spatial periodicity as a simple way to generate the models. Among the various pore architectures that have been studied, this work models the pore structure with triply periodic minimal surfaces (TPMS) for the construction of tissue engineering scaffolds. TPMS are shown to be a versatile source of biomorphic scaffold designs. A set of tissue scaffolds using TPMS-based unit cell libraries was designed. The TPMS-based titanium foams are meant to be 3D printed with the predicted geometry and microstructure and, consequently, the predicted mechanical properties. Through finite element analysis (FEA), the mechanical properties of the designed scaffolds were determined in compression and analyzed in terms of their porosity and assemblies of unit cells. The purpose of this work was to investigate the mechanical performance of TPMS models, trying to understand the best compromise between the mechanical and geometrical requirements of the scaffolds. The intention was to predict the structural modulus of open porous materials via the structural design of interconnected three-dimensional lattices, hence optimising geometrical properties. With the aid of the FEA results, it is expected that the effective mechanical properties of the TPMS-based scaffold units can be used to design optimized scaffolds for tissue engineering applications. Regardless of the influence of the fabrication method, it is desirable to calculate scaffold properties so that the effect of these properties on tissue regeneration may be better understood.
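
A minimal Python sketch of how a TPMS-based unit cell can be sampled for scaffold design: the gyroid surface is evaluated on a voxel grid and the resulting porosity computed. The grid size and level-set threshold are illustrative assumptions, and the FEA step is not included.

```python
import numpy as np

def gyroid_solid(n=64, t=0.0, cells=1):
    """Sample the gyroid TPMS  sin x cos y + sin y cos z + sin z cos x = t
    on an n^3 voxel grid; voxels with value <= t are treated as solid."""
    u = np.linspace(0.0, 2.0 * np.pi * cells, n, endpoint=False)
    X, Y, Z = np.meshgrid(u, u, u, indexing="ij")
    F = np.sin(X) * np.cos(Y) + np.sin(Y) * np.cos(Z) + np.sin(Z) * np.cos(X)
    return F <= t  # boolean solid mask for one (or several) unit cells

solid = gyroid_solid(n=64, t=0.0)
porosity = 1.0 - solid.mean()
print(f"porosity: {porosity:.2f}")  # ~0.50 for t = 0, since the gyroid bisects space
```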

Relevance: 30.00%

Abstract:

This thesis deals with classical vector spin glasses, a type of disordered magnet, on various lattice types. Since they are relevant to experimental realizations, a theoretical understanding of spin-glass models with few spin components and low lattice dimension is of great importance. As this proves to be very difficult, new and promising approaches are needed. This work therefore considers the limit of infinitely many spin dimensions. In this limit, several simplifications arise compared with models of low spin dimension, so that for this important problem properties at both zero and finite temperature are determined, mostly with numerical methods. Both hypercubic lattices and a versatile one-dimensional model are considered. The latter allows different universality classes to be investigated simply by tuning a single parameter. Finite-size scaling forms, critical exponents, ratios of critical exponents, and other critical quantities are proposed and compared with numerical results. A detailed description of the derivations of all numerically evaluated equations is also given. At zero temperature, a thorough investigation of ground states and defect energies is carried out. A number of interesting quantities are analyzed, and in particular the lower critical dimension is determined. At finite temperature, the order parameter and the spin-glass susceptibility are accessible via the numerically computed correlation matrix. The spin-glass model in the limit of infinitely many spin components can be regarded as a starting point for studying the more natural models of low spin dimension. A model that combines the advantages of the former with the properties of the latter would of course be desirable. Therefore a model with anisotropy is proposed and tested in an attempt to achieve this goal. Appealing ways of using the model are pointed out in order to stimulate further, more in-depth study. Finally, so-called real-space renormalization-group calculations are performed, both analytically and numerically, for finite-dimensional vector spin glasses with a finite number of spin components. This is done with a newly derived Migdal-Kadanoff recursion relation. Among other quantities, the lower critical dimension is determined.
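
A small Python sketch of the finite-temperature observable mentioned above: the spin-glass susceptibility computed from a spin-spin correlation matrix as chi_SG = (1/N) * sum_ij C_ij^2. The random symmetric matrix below merely stands in for actual simulation data.

```python
import numpy as np

def spin_glass_susceptibility(C):
    """chi_SG = (1/N) * sum_ij C_ij^2 from a spin-spin correlation matrix C."""
    C = np.asarray(C, dtype=float)
    N = C.shape[0]
    return np.sum(C ** 2) / N

# Stand-in data: a random symmetric "correlation" matrix with unit diagonal.
rng = np.random.default_rng(1)
N = 128
M = rng.normal(scale=0.05, size=(N, N))
C = (M + M.T) / 2.0
np.fill_diagonal(C, 1.0)
print(spin_glass_susceptibility(C))
```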

Relevance: 30.00%

Abstract:

Data deduplication describes a class of approaches that reduce the storage capacity needed to store data or the amount of data that has to be transferred over a network. These approaches detect coarse-grained redundancies within a data set, e.g. a file system, and remove them.

One of the most important applications of data deduplication is backup storage systems, where these approaches are able to reduce the storage requirements to a small fraction of the logical backup data size. This thesis introduces multiple new extensions of so-called fingerprinting-based data deduplication. It starts with the presentation of a novel system design, which allows using a cluster of servers to perform exact data deduplication with small chunks in a scalable way.

Afterwards, a combination of compression approaches for an important, but often overlooked, data structure in data deduplication systems, the so-called block and file recipes, is introduced. Using these compression approaches, which exploit unique properties of data deduplication systems, the size of these recipes can be reduced by more than 92% in all investigated data sets. As file recipes can occupy a significant fraction of the overall storage capacity of data deduplication systems, the compression enables significant savings.

A technique to increase the write throughput of data deduplication systems, based on the aforementioned block and file recipes, is introduced next. The novel Block Locality Caching (BLC) uses properties of block and file recipes to overcome the chunk lookup disk bottleneck of data deduplication systems, which limits either their scalability or their throughput. The presented BLC overcomes the disk bottleneck more efficiently than existing approaches. Furthermore, it is shown to be less prone to aging effects.

Finally, it is investigated whether large HPC storage systems exhibit redundancies that can be found by fingerprinting-based data deduplication. Over 3 PB of HPC storage data from different data sets have been analyzed. In most data sets, between 20% and 30% of the data can be classified as redundant. According to these results, future work should further investigate how data deduplication can be integrated into future HPC storage systems.

This thesis presents important novel work in different areas of data deduplication research.
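
A minimal Python sketch of fingerprinting-based deduplication: data is split into chunks, each chunk is identified by a cryptographic fingerprint, and only unique chunks are stored. Fixed-size chunking is used here for brevity, whereas real systems (and this thesis) typically use content-defined chunking; the sample payload is made up.

```python
import hashlib

def dedup_ratio(data: bytes, chunk_size: int = 4096):
    """Fingerprinting-based deduplication over fixed-size chunks."""
    index = {}                       # fingerprint -> stored chunk
    recipe = []                      # sequence of fingerprints ("file recipe")
    stored = 0
    for off in range(0, len(data), chunk_size):
        chunk = data[off:off + chunk_size]
        fp = hashlib.sha1(chunk).hexdigest()
        if fp not in index:          # chunk lookup: store only new chunks
            index[fp] = chunk
            stored += len(chunk)
        recipe.append(fp)
    return stored / max(len(data), 1), recipe

if __name__ == "__main__":
    payload = (b"A" * 8192 + b"B" * 4096) * 16   # highly redundant sample data
    ratio, recipe = dedup_ratio(payload)
    print(f"physical/logical size: {ratio:.2f}, recipe length: {len(recipe)}")
```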

Relevance: 30.00%

Abstract:

In recent years, deep learning techniques have been shown to perform well on a large variety of problems in both Computer Vision and Natural Language Processing, reaching and often surpassing the state of the art on many tasks. The rise of deep learning is also revolutionizing the entire field of Machine Learning and Pattern Recognition, pushing forward the concepts of automatic feature extraction and unsupervised learning in general. However, despite its strong success in both science and business, deep learning has its own limitations. It is often questioned whether such techniques are merely a kind of brute-force statistical approach and whether they can only work in the context of High Performance Computing with very large amounts of data. Another important question is whether they are really biologically inspired, as claimed in certain cases, and whether they can scale well in terms of "intelligence". The dissertation focuses on trying to answer these key questions in the context of Computer Vision and, in particular, Object Recognition, a task that has been heavily revolutionized by recent advances in the field. Practically speaking, these answers are based on an exhaustive comparison between two very different deep learning techniques on the aforementioned task: the Convolutional Neural Network (CNN) and Hierarchical Temporal Memory (HTM). They represent two different approaches and points of view under the broad umbrella of deep learning and are the best choices for understanding and pointing out the strengths and weaknesses of each. The CNN is considered one of the most classic and powerful supervised methods used today in machine learning and pattern recognition, especially in object recognition. CNNs are well received and accepted by the scientific community and are already deployed in large corporations such as Google and Facebook for solving face recognition and image auto-tagging problems. HTM, on the other hand, is a new, emerging paradigm and a mainly unsupervised method that is more biologically inspired. It tries to gain insights from the computational neuroscience community in order to incorporate concepts such as time, context, and attention during the learning process, which are typical of the human brain. In the end, the thesis aims to show that in certain cases, with a smaller quantity of data, HTM can outperform CNN.
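
For concreteness, a minimal PyTorch definition of a small CNN of the kind compared in the dissertation; the architecture and layer sizes are illustrative assumptions, not the networks actually evaluated.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallCNN(nn.Module):
    """Minimal convolutional network for object recognition on 32x32 RGB
    images; layer sizes are illustrative, not those used in the thesis."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 16, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=3, padding=1)
        self.pool = nn.MaxPool2d(2, 2)
        self.fc = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))   # -> 16 x 16 x 16
        x = self.pool(F.relu(self.conv2(x)))   # -> 32 x 8 x 8
        return self.fc(x.flatten(start_dim=1))

logits = SmallCNN()(torch.randn(4, 3, 32, 32))
print(logits.shape)  # torch.Size([4, 10])
```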

Relevance: 30.00%

Abstract:

An algorithm for the real-time registration of a retinal video sequence captured with a scanning digital ophthalmoscope (SDO) to a retinal composite image is presented. This method is designed for a computer-assisted retinal laser photocoagulation system to compensate for retinal motion and hence enhance the accuracy, speed, and patient safety of retinal laser treatments. The procedure combines intensity- and feature-based registration techniques. For the registration of an individual frame, the translational frame-to-frame motion between the preceding and current frame is detected by normalized cross correlation. Next, vessel points on the current video frame are identified, and an initial transformation estimate is constructed from the calculated translation vector and the quadratic registration matrix of the previous frame. The vessel points are then iteratively matched to the segmented vessel centerline of the composite image to refine the initial transformation and register the video frame to the composite image. Criteria for image quality and algorithm convergence are introduced, which govern the exclusion of single frames from the registration process and enable a loss-of-tracking signal if necessary. The algorithm was successfully applied to ten different video sequences recorded from patients. It achieved an average accuracy of 2.47 ± 2.0 pixels (∼23.2 ± 18.8 μm) for 2764 evaluated video frames and demonstrated that it meets the clinical requirements.
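
A rough Python sketch of the normalized-cross-correlation step used for the frame-to-frame translation estimate (the vessel-based quadratic refinement is omitted); the brute-force search window and synthetic frames are illustrative assumptions.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation coefficient of two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return 0.0 if denom == 0 else float((a * b).sum() / denom)

def estimate_translation(prev, curr, max_shift=10):
    """Exhaustive NCC search for the integer translation that best aligns
    the current frame with the previous one (small-motion assumption)."""
    h, w = prev.shape
    best, best_shift = -1.0, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # Overlapping regions of the two frames for this candidate shift
            p = prev[max(0, dy):h + min(0, dy), max(0, dx):w + min(0, dx)]
            c = curr[max(0, -dy):h + min(0, -dy), max(0, -dx):w + min(0, -dx)]
            score = ncc(p, c)
            if score > best:
                best, best_shift = score, (dy, dx)
    return best_shift, best

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = rng.random((64, 64))
    shifted = np.roll(frame, shift=(3, -2), axis=(0, 1))
    print(estimate_translation(frame, shifted))  # recovered integer shift and NCC score
```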

Relevance: 30.00%

Abstract:

Image overlay projection is a form of augmented reality that allows surgeons to view underlying anatomical structures directly on the patient's surface. It improves the intuitiveness of computer-aided surgery by removing the need for sight diversion between the patient and a display screen, and it has been reported to assist the 3-D understanding of anatomical structures and the identification of target and critical structures. Challenges in the development of image overlay technologies for surgery remain in the projection setup: calibration, patient registration, view direction, and projection obstruction remain unsolved limitations of image overlay techniques. In this paper, we propose a novel, portable, handheld, navigated image overlay device based on miniature laser projection technology that allows images of 3-D patient-specific models to be projected directly onto the organ surface intraoperatively, without the need for intrusive hardware around the surgical site. The device can be integrated into a navigation system, thereby exploiting existing patient registration and model generation solutions. The position of the device is tracked by the navigation system's position sensor and used to project geometrically correct images from any position within the workspace of the navigation system. The projector was calibrated using modified camera calibration techniques, and images for projection are rendered using a virtual camera defined by the projector's extrinsic parameters. Verification of the device's projection accuracy yielded a mean projection error of 1.3 mm. Visibility testing of the projection performed on pig liver tissue found the device suitable for the display of anatomical structures on the organ surface. The feasibility of use within the surgical workflow was assessed during open liver surgery. We show that the device could be quickly and unobtrusively deployed within the sterile environment.
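
A minimal Python sketch of the rendering step described above: 3-D model points are projected through a pinhole model given intrinsics K and extrinsics (R, t) of a virtual camera. All numerical values are hypothetical, not the device's actual calibration.

```python
import numpy as np

def project_points(points_3d, K, R, t):
    """Project 3-D model points (N x 3, navigation-system coordinates)
    through a pinhole model: x ~ K (R X + t)."""
    X_cam = points_3d @ R.T + t          # transform into projector frame
    x = X_cam @ K.T                      # apply intrinsics
    return x[:, :2] / x[:, 2:3]          # perspective divide -> pixel coordinates

# Hypothetical calibration values (not the device's actual parameters).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 500.0])          # projector 500 mm from the organ surface
pts = np.array([[0.0, 0.0, 0.0], [10.0, -5.0, 2.0]])
print(project_points(pts, K, R, t))
```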

Relevance: 30.00%

Abstract:

Ultrasound-guided techniques are increasingly used in anaesthetic practice to identify tissues beneath the skin and to increase the accuracy of placement of needles close to targeted structures. To examine the usefulness of ultrasound for dilatational tracheostomy, we performed ultrasound-guided tracheal punctures in human cadavers followed by computed tomography (CT) control.

Relevance: 30.00%

Abstract:

This paper examines the accuracy of software-based on-line energy estimation techniques. It evaluates today’s most widespread energy estimation model in order to investigate whether the current methodology of pure software-based energy estimation running on a sensor node itself can indeed reliably and accurately determine its energy consumption - independent of the particular node instance, the traffic load the node is exposed to, or the MAC protocol the node is running. The paper enhances today’s widely used energy estimation model by integrating radio transceiver switches into the model, and proposes a methodology to find the optimal estimation model parameters. It proves by statistical validation with experimental data that the proposed model enhancement and parameter calibration methodology significantly increases the estimation accuracy.
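
A hedged Python sketch of the kind of state-based energy model and parameter calibration discussed above: per-state currents and a per-switch charge are fitted to measured consumption by least squares. The state set, variable names, and numbers are assumptions for illustration, not the paper's actual model or data.

```python
import numpy as np

# Per-interval accounting data (hypothetical): time spent in TX, RX and
# idle/sleep [s], plus the number of radio transceiver switches.
features = np.array([
    [0.20, 1.10, 58.7, 120.0],
    [0.45, 0.90, 58.6, 210.0],
    [0.10, 1.50, 58.4, 80.0],
    [0.60, 0.70, 58.7, 300.0],
    [0.35, 1.20, 58.4, 150.0],
    [0.25, 0.80, 58.9, 260.0],
])

# Ground-truth parameters used only to synthesise "measured" charge here;
# on real hardware these values would come from testbed measurements.
true_params = np.array([17e-3, 20e-3, 50e-6, 1.2e-6])  # I_tx, I_rx, I_idle [A], Q_switch [C]
rng = np.random.default_rng(0)
measured = features @ true_params * (1.0 + 0.01 * rng.standard_normal(len(features)))

# Calibration: least-squares fit of the model
#   Q = t_tx*I_tx + t_rx*I_rx + t_idle*I_idle + n_switch*Q_switch
params, *_ = np.linalg.lstsq(features, measured, rcond=None)
print("calibrated:", params)
print("true:      ", true_params)
```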

Relevance: 30.00%

Abstract:

Oncological liver surgery and interventions aim to remove tumor tissue while preserving a sufficient amount of functional tissue to ensure organ regeneration. This requires a detailed understanding of the patient-specific internal organ anatomy (blood vessel system, bile ducts, tumor location). The introduction of computer support into the surgical process enhances anatomical orientation through patient-specific 3D visualization and enables the precise reproduction of planned surgical strategies through stereotactic navigation technology. This article provides clinical background information on indications and techniques for the treatment of liver tumors, reviews the technological contributions addressing the problem of organ motion during navigated surgery on a deforming organ, and finally presents an overview of the clinical experience in computer-assisted liver surgery and interventions. The review concludes that several clinically applicable solutions for computer-aided liver surgery are available and that small-scale clinical trials have been performed. Further developments will be required for more accurate and faster handling of organ deformation, and large clinical studies will be needed to demonstrate the benefits of computer-aided liver surgery.

Relevance: 30.00%

Abstract:

The Simulation Automation Framework for Experiments (SAFE) streamlines the design and execution of experiments with the ns-3 network simulator. SAFE ensures that best practices are followed throughout the workflow of a network simulation study, guaranteeing that results are both credible and reproducible by third parties. Data analysis is a crucial part of this workflow, where mistakes are often made. Even when appearing in highly regarded venues, scientific graphics in numerous network simulation publications fail to include graphic titles, units, legends, and confidence intervals. After studying the literature in network simulation methodology and information graphics visualization, I developed a visualization component for SAFE to help users avoid these errors in their scientific workflow. The functionality of this new component includes support for interactive visualization through a web-based interface and for the generation of high-quality static plots that can be included in publications. The overarching goal of my contribution is to help users create graphics that follow best practices in visualization and thereby succeed in conveying the right information about simulation results.
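
As an illustration of the plotting best practices the component enforces (titles, axis units, legends, confidence intervals), here is a small matplotlib sketch with hypothetical simulation results; it is not part of SAFE itself.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical simulation results: mean throughput vs. offered load,
# with 95% confidence intervals over independent replications.
load = np.array([0.1, 0.2, 0.4, 0.6, 0.8, 1.0])              # offered load (normalized)
mean_tput = np.array([0.10, 0.19, 0.36, 0.48, 0.52, 0.53])   # Mbit/s
ci95 = np.array([0.01, 0.01, 0.02, 0.03, 0.04, 0.04])        # half-width of 95% CI

fig, ax = plt.subplots()
ax.errorbar(load, mean_tput, yerr=ci95, fmt="o-", capsize=3,
            label="protocol A (n = 30 replications)")
ax.set_title("Mean throughput vs. offered load")
ax.set_xlabel("Offered load (normalized)")
ax.set_ylabel("Throughput (Mbit/s)")
ax.legend()
fig.savefig("throughput.png", dpi=300)
```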

Relevance: 30.00%

Abstract:

Signal proteins are able to adapt their response to a change in the environment, governing in this way a broad variety of important cellular processes in living systems. While conventional molecular-dynamics (MD) techniques can be used to explore the early signaling pathway of these protein systems at atomistic resolution, the high computational costs limit their usefulness for the elucidation of the multiscale transduction dynamics of most signaling processes, which occur on experimental timescales. To cope with this problem, we present in this paper a novel multiscale modeling method based on a combination of the kinetic Monte Carlo and MD techniques, and demonstrate its suitability for investigating the signaling behavior of the photoswitch light-oxygen-voltage-2-Jα domain from Avena sativa (AsLOV2-Jα) and an AsLOV2-Jα-regulated photoactivatable Rac1 GTPase (PA-Rac1), recently employed to control the motility of cancer cells through light stimulus. More specifically, we show that their signaling pathways begin with a rearrangement of amino-acid residues and subsequent H-bond formation near the flavin mononucleotide chromophore, causing a coupling between β-strands and the subsequent detachment of a peripheral α-helix from the AsLOV2 domain. In the case of the PA-Rac1 system, we find that this latter process induces the release of the AsLOV2 inhibitor from the switch-II activation site of the GTPase, enabling signal activation through effector-protein binding. These applications demonstrate that our approach reliably reproduces the signaling pathways of complex signal proteins, ranging from nanoseconds up to seconds, at affordable computational costs.
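
A minimal Python sketch of the kinetic Monte Carlo half of such a multiscale scheme: a Gillespie-style simulation hopping between discrete conformational states. The state names and rates below are hypothetical, not taken from the AsLOV2-Jα study.

```python
import numpy as np

def kinetic_monte_carlo(rates, start, t_end, rng):
    """Gillespie-style kinetic Monte Carlo over discrete conformational
    states. rates[i][j] is the transition rate i -> j (1/s)."""
    t, state, trajectory = 0.0, start, [(0.0, start)]
    while t < t_end:
        out = rates[state]
        total = sum(out.values())
        if total == 0.0:
            break                                   # absorbing state
        t += rng.exponential(1.0 / total)           # exponential waiting time
        targets, w = zip(*out.items())
        state = rng.choice(targets, p=np.array(w) / total)
        trajectory.append((t, state))
    return trajectory

# Hypothetical 3-state model (dark -> lit -> undocked), rates in 1/s.
rates = {"dark": {"lit": 50.0},
         "lit": {"undocked": 5.0, "dark": 1.0},
         "undocked": {"dark": 0.2}}
traj = kinetic_monte_carlo(rates, "dark", t_end=10.0, rng=np.random.default_rng(0))
print(traj[:5])
```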