986 results for Post-processing software
Abstract:
Over the last few decades, unprecedented technological growth has been at the center of embedded systems design, with Moore’s Law as the leading driver of this trend. Today an ever-increasing number of cores can be integrated on the same die, marking the transition from state-of-the-art multi-core chips to the new many-core design paradigm. Despite the extraordinarily high computing power, the complexity of many-core chips opens the door to several challenges: as a result of the increased silicon density of modern Systems-on-a-Chip (SoC), the design space that must be explored to find the best design has exploded, and hardware designers now face an enormous design space. Virtual Platforms have long been used to enable hardware-software co-design, but today they must cope with the huge complexity of both the hardware and the software systems involved. In this thesis, two research works on Virtual Platforms are presented: the first is intended for the hardware developer, to easily enable complex cycle-accurate simulations of many-core SoCs; the second exploits the parallel computing power of off-the-shelf General Purpose Graphics Processing Units (GPGPUs) with the goal of increasing simulation speed. In the context of many-core systems, the term Virtualization refers not only to the aforementioned hardware emulation tools (Virtual Platforms), but also to two other main purposes: 1) helping the programmer achieve the maximum possible performance of an application by hiding the complexity of the underlying hardware, and 2) efficiently exploiting the highly parallel hardware of many-core chips in environments with multiple active Virtual Machines. This thesis focuses on virtualization techniques that aim to mitigate, and where possible overcome, some of the challenges introduced by the many-core design paradigm.
Abstract:
In Western industrialized countries, breast carcinoma is the most common malignant tumour in women. Worldwide, it accounts for roughly 21% of all cancers in women. By now, one in nine women is at risk of developing breast cancer during her lifetime. The age-standardized mortality rate currently stands at just under 27%. Breast carcinoma has a relatively low growth rate. A diagnostic procedure capable of detecting and removing all breast carcinomas below 10 mm in diameter would practically eliminate death from breast cancer, since the 20-year survival rate for initial carcinomas of 5 to 10 mm in size is very high, at over 95%. Contrast-enhanced MRI is a relatively young examination method that is sensitive enough to detect carcinomas from a diameter of 3 mm. The diagnostic methodology, however, is complex, error-prone, and requires a long training period and thus considerable experience on the part of the radiologist. Computer-aided diagnosis software can improve the quality of such a complex diagnosis, or at least speed up the process. The goal of this work is the development of fully automatic diagnosis software that can be used as a second-opinion system. To my knowledge, no such complete software exists to date. The software executes a chain of image processing steps modelled on the radiologist's own procedure and, as a result, produces an independent diagnosis for each detected lesion: first, a 3D image registration eliminates motion artefacts as a preprocessing step to improve the image quality for the subsequent processing steps. Every contrast-enhancing object is detected by a rule-based segmentation with adaptive thresholds. Kinetic and morphological features are computed to describe the contrast uptake behaviour as well as the shape, margin and texture properties of each object. Finally, based on the resulting feature vector, two trained neural networks classify each object as an additional finding or as a benign or malignant lesion. The performance of the software was tested on image data from 101 female patients containing 141 histologically verified lesions. Predicting whether these lesions were benign or malignant yielded a sensitivity of 88% at a specificity of 72%, values similar to the predictions of expert radiologists reported in the literature. On average, the predictions contained 2.5 additional malignant findings per patient, which turned out to be misclassified artefacts.
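As a rough illustration of the processing chain summarized above, the following Python sketch shows a relative-enhancement segmentation with a fixed threshold and two simple kinetic features per lesion; the function names, the threshold value, and the synthetic data are assumptions made for illustration and are not the thesis software.

```python
# Illustrative sketch (hypothetical names and threshold, not the thesis code):
# segmentation of contrast-enhancing objects in a dynamic series, followed by
# simple kinetic features (wash-in, wash-out) for one detected object.
import numpy as np
from scipy import ndimage

def segment_enhancing_objects(pre, post, rel_threshold=0.5):
    """Label 3D connected regions whose relative enhancement exceeds a threshold."""
    enhancement = (post - pre) / np.maximum(pre, 1e-6)
    labels, n_objects = ndimage.label(enhancement > rel_threshold)
    return labels, n_objects

def kinetic_features(series, labels, lesion_id):
    """Mean wash-in (initial rise) and wash-out (late change) for one lesion."""
    region = labels == lesion_id
    curve = np.array([volume[region].mean() for volume in series])
    return curve[1] - curve[0], curve[-1] - curve[1]

# Usage with synthetic volumes: series[0] plays the role of the pre-contrast scan.
series = [np.random.rand(32, 64, 64) + t * 0.2 for t in range(5)]
labels, n = segment_enhancing_objects(series[0], series[1])
if n:
    wash_in, wash_out = kinetic_features(series, labels, 1)
```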
Abstract:
Data sets describing the state of the earth's atmosphere are of great importance in the atmospheric sciences. Over the last decades, the quality and sheer amount of the available data increased significantly, resulting in a rising demand for new tools capable of handling and analysing these large, multidimensional sets of atmospheric data. The interdisciplinary work presented in this thesis covers the development and the application of practical software tools and efficient algorithms from the field of computer science, aiming at the goal of enabling atmospheric scientists to analyse and to gain new insights from these large data sets. For this purpose, our tools combine novel techniques with well-established methods from different areas such as scientific visualization and data segmentation. In this thesis, three practical tools are presented. Two of these tools are software systems (Insight and IWAL) for different types of processing and interactive visualization of data; the third tool is an efficient algorithm for data segmentation implemented as part of Insight. Insight is a toolkit for the interactive, three-dimensional visualization and processing of large sets of atmospheric data, originally developed as a testing environment for the novel segmentation algorithm. It provides a dynamic system for combining data from different sources at runtime, a variety of data processing algorithms, and several visualization techniques. Its modular architecture and flexible scripting support led to additional applications of the software, of which two examples are presented: the usage of Insight as a WMS (web map service) server, and the automatic production of a sequence of images for the visualization of cyclone simulations. The core application of Insight is the provision of the novel segmentation algorithm for the efficient detection and tracking of 3D features in large sets of atmospheric data, as well as for the precise localization of the occurring genesis, lysis, merging and splitting events. Data segmentation usually leads to a significant reduction of the size of the considered data. This enables a practical visualization of the data, statistical analyses of the features and their events, and the manual or automatic detection of interesting situations for subsequent detailed investigation. The concepts of the novel algorithm, its technical realization, and several extensions for avoiding under- and over-segmentation are discussed. As example applications, this thesis covers the setup and the results of the segmentation of upper-tropospheric jet streams and cyclones as full 3D objects. Finally, IWAL is presented, a web application that provides easy interactive access to meteorological data visualizations, aimed primarily at students. As a web application, it avoids the need to retrieve all input data sets and to install and operate complex visualization tools on a local machine. The main challenge in providing customizable visualizations to large numbers of simultaneous users was to find an acceptable trade-off between the available visualization options and the performance of the application. Besides the implementation details, benchmarks and the results of a user survey are presented.
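The following minimal Python sketch conveys the general idea of threshold-based 3D segmentation and overlap-based association of features between consecutive time steps, from which merging and splitting events can be read off. The thesis algorithm is considerably more elaborate (it explicitly counters under- and over-segmentation); the names and thresholds here are assumptions.

```python
# Minimal illustration (not the thesis algorithm): segment a 3D field by
# thresholding and associate features across two time steps by voxel overlap.
import numpy as np
from scipy import ndimage

def segment(field, threshold):
    """Label connected 3D regions where the field exceeds a threshold."""
    labels, n = ndimage.label(field > threshold)
    return labels, n

def associate(labels_prev, labels_curr):
    """Map each previous feature to the current features it overlaps with."""
    links = {}
    for prev_id in range(1, labels_prev.max() + 1):
        overlap = labels_curr[labels_prev == prev_id]
        links[prev_id] = sorted(set(overlap[overlap > 0].tolist()))
    return links  # one-to-many entries suggest splitting; many-to-one, merging

# Usage with synthetic wind-speed-like fields at two time steps.
rng = np.random.default_rng(0)
t0, t1 = rng.random((10, 20, 20)), rng.random((10, 20, 20))
l0, _ = segment(t0, 0.95)
l1, _ = segment(t1, 0.95)
print(associate(l0, l1))
```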
Abstract:
The impact of information technology and automation on medical and healthcare procedures is becoming increasingly broad, bringing considerable benefits for the health and well-being of all patients while at the same time posing ever more complex challenges to healthcare institutions and to manufacturers of medical devices. Beyond the merits of any single product or technology, there is a strategic factor that should always be considered when designing software or a device for healthcare use (by manufacturers) or when planning the purchase of complex medical or diagnostic systems (by healthcare institutions): Integration, that is, the ability of the system to fit simply, effectively and inexpensively into a fully integrated workflow. The first chapter of this final thesis gives an overview of the organization of the radiology workflow and covers the main standards for communication and data exchange in the clinical domain, primarily DICOM, HL7 and IHE. The second chapter deals with the organization and objectives of the Radiology Department as implemented by Gruppo Villa Maria (GVM), comparing them with the document 'Linee guida per l'assicurazione di qualità in teleradiologia' drawn up by the Istituto Superiore di Sanità, which is intended to provide not only radiologists and radiographers (TSRM) but also the other healthcare professionals involved with the information and methodological elements needed to organize teleradiology in compliance with the requirements of medical ethics, patient safety including consent, radiation protection, privacy and quality. The third chapter lists the tools with which the Radiology Department intends to pursue its objectives. The fourth chapter describes my experience at a clinic outside GVM, namely the Santo Stefano clinic in Porto Potenza Picena, where tests were carried out to evaluate the upload times of CT and MRI examinations to the GVM cloud.
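A minimal sketch of a loading-time test of the kind mentioned in the fourth chapter is given below; the directory layout, the upload callback, and the throughput reporting are placeholders, not the clinic's actual infrastructure or procedure.

```python
# Hedged sketch of an upload-timing test (placeholder paths and upload call,
# not the GVM cloud client): time the transfer of every file of a study
# directory and report the achieved throughput.
import time
from pathlib import Path

def measure_upload(study_dir, upload_fn):
    """Time the transfer of every file of a CT/MRI study and report MB/s."""
    files = [p for p in Path(study_dir).rglob("*") if p.is_file()]
    total_bytes = sum(p.stat().st_size for p in files)
    start = time.perf_counter()
    for p in files:
        upload_fn(p)                      # e.g. a PACS or cloud client call
    elapsed = time.perf_counter() - start
    return elapsed, total_bytes / 2**20 / max(elapsed, 1e-9)

# Example: a stand-in "upload" that simply reads each file locally.
elapsed_s, mb_per_s = measure_upload(".", lambda p: p.read_bytes())
```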
Abstract:
As lipofilling of the female breast is becoming more popular in plastic surgery, the use of MRI to assess breast volume has been employed to control postoperative results. Therefore, we sought to evaluate the accuracy of magnetic resonance imaging (MRI)-based breast volumetry software tools by comparing the measurements of silicone implant augmented breasts with the actual implant volume specified by the manufacturer. MRI-based volume analysis was performed in eight bilaterally augmented patients (46 ± 9 years) with three different software programs (Brainlab© iPlan 2.6 neuronavigation software; mass analysis, version 5.3, Medis©; and OsiriX© v.3.0.2 32-bit). The implant volumes analysed by the BrainLab© software had a mean deviation of 2.2 ± 1.7% (r = 0.99) relative to the implanted prosthesis. OsiriX© software analysis resulted in a mean deviation of 2.8 ± 3.0% (r = 0.99) and the Medis© software had a mean deviation of 3.1 ± 3.0% (r = 0.99). Overall, the volumes of all analysed breast implants correlated very well with the real implant volumes. Processing time was 10 min per breast with each system and 30 s (OsiriX©) to 5 min (BrainLab© and Medis©) per silicone implant. MRI-based volumetry is a powerful tool to calculate both native breast and silicone implant volume in situ. All software solutions performed well and the measurements were close to the actual implant sizes. The use of MRI breast volumetry may be helpful in: (1) planning reconstructive and aesthetic surgery of asymmetric breasts, (2) calculating implant size in patients with missing documentation of a previously implanted device and (3) assessing post-operative results objectively.
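The reported accuracy figures can be reproduced conceptually as in the following sketch, which computes the percent deviation of measured from nominal implant volumes and their Pearson correlation; the numbers used are placeholders, not the study data.

```python
# Sketch of the accuracy metrics reported above (placeholder values):
# percent deviation of software-derived volumes from the nominal implant
# volumes, plus the Pearson correlation coefficient between the two.
import numpy as np

measured = np.array([262.0, 318.0, 296.0, 410.0])   # software-derived volumes (ml)
nominal = np.array([255.0, 325.0, 300.0, 400.0])    # manufacturer-specified volumes (ml)

deviation_pct = np.abs(measured - nominal) / nominal * 100
mean_dev, sd_dev = deviation_pct.mean(), deviation_pct.std(ddof=1)
r = np.corrcoef(measured, nominal)[0, 1]
print(f"deviation {mean_dev:.1f} ± {sd_dev:.1f} %, r = {r:.2f}")
```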
Abstract:
This thesis presents two frameworks, a software framework and a hardware core manager framework, which together can be used to develop a processing platform based on a distributed system of field-programmable gate array (FPGA) boards. The software framework provides users with the ability to easily develop applications that exploit the processing power of FPGAs, while the hardware core manager framework gives users the ability to configure and interact with multiple FPGA boards and/or hardware cores. This thesis describes the design and development of these frameworks and analyzes the performance of a system that was constructed using them. The performance analysis included measuring the effect of incorporating additional hardware components into the system and comparing the system to a software-only implementation. This work draws conclusions based on the results of the performance analysis and offers suggestions for future work.
Abstract:
Embedded siloxane polymer waveguides have shown promising results for use in optical backplanes. They exhibit high temperature stability, low optical absorption, and require common processing techniques. A challenging aspect of this technology is out-of-plane coupling of the waveguides. A multi-software approach to modeling an optical vertical interconnect (via) is proposed. This approach utilizes the beam propagation method to generate varied modal field distribution structures which are then propagated through a via model using the angular spectrum propagation technique. Simulation results show average losses between 2.5 and 4.5 dB for different initial input conditions. Certain configurations show losses of less than 3 dB and it is shown that in an input/output pair of vias, average losses per via may be lower than the targeted 3 dB.
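A minimal sketch of the angular spectrum propagation technique mentioned above is shown below, propagating a scalar field over a distance and expressing the transmitted power as a loss in dB; the wavelength, grid, and Gaussian input are assumptions made for illustration rather than the thesis' via model.

```python
# Minimal angular-spectrum method sketch (assumed wavelength and grid, not the
# thesis' via model): propagate a 2D scalar field a distance z and report the
# power loss in dB.
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a 2D complex field a distance z in a homogeneous medium."""
    k = 2 * np.pi / wavelength
    fx = np.fft.fftfreq(field.shape[0], d=dx)
    fy = np.fft.fftfreq(field.shape[1], d=dx)
    FX, FY = np.meshgrid(fx, fy, indexing="ij")
    kz_sq = k**2 - (2 * np.pi * FX) ** 2 - (2 * np.pi * FY) ** 2
    kz = np.sqrt(np.maximum(kz_sq, 0.0))
    transfer = np.exp(1j * kz * z) * (kz_sq > 0)   # evanescent components dropped
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

# Gaussian input mode on a 1 µm grid, propagated 50 µm at 850 nm.
x = (np.arange(128) - 64) * 1e-6
X, Y = np.meshgrid(x, x, indexing="ij")
u0 = np.exp(-(X**2 + Y**2) / (5e-6) ** 2)
u1 = angular_spectrum_propagate(u0, 850e-9, 1e-6, 50e-6)
loss_db = -10 * np.log10(np.sum(np.abs(u1)**2) / np.sum(np.abs(u0)**2))
```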
Abstract:
The increasing practice of offshore outsourcing software maintenance has posed the challenge of effectively transferring knowledge to individual software engineers of the vendor. In this theoretical paper, we discuss the implications of two learning theories, the model of work-based learning (MWBL) and cognitive load theory (CLT), for knowledge transfer during the transition phase. Taken together, the theories suggest that learning mechanisms need to be aligned with the type of knowledge (tacit versus explicit), task characteristics (complexity and recurrence), and the recipients’ expertise. The MWBL proposes that learning mechanisms need to include conceptual and practical activities based on the relative importance of explicit and tacit knowledge. CLT explains how effective portfolios of learning mechanisms change over time. While job shadowing, completion tasks, and supportive information may prevail at the outset of transition, they may be replaced by work on conventional tasks towards the end of transition.
Abstract:
We present an overview of recent developments in the tmLQCD software suite. We summarise the features of the code, including actions and operators implemented. In particular, we discuss the optimisation efforts for modern architectures using the Blue Gene/Q system as an example.
Abstract:
Oligonucleotides comprising unnatural building blocks, which interfere with the translation machinery, have gained increased attention for the treatment of gene-related diseases (e.g. antisense, RNAi). Due to structural modifications, synthetic oligonucleotides exhibit increased biostability and bioavailability upon administration. Consequently, classical enzyme-based sequencing methods are not applicable to their sequence elucidation and verification. Tandem mass spectrometry is the method of choice for performing such tasks, since gas-phase dissociation is not restricted to natural nucleic acids. However, tandem mass spectrometric analysis can generate product ion spectra of tremendous complexity, as the number of possible fragments grows rapidly with increasing sequence length. The fact that structural modifications affect the dissociation pathways greatly increases the variety of analytically valuable fragment ions. The gas-phase dissociation of oligonucleotides is characterized by the cleavage of one of the four bonds along the phosphodiester chain, by the accompanying loss of nucleobases, and by the generation of internal fragments due to secondary backbone cleavage. For example, an 18-mer oligonucleotide yields a total of 272,920 theoretical fragment ions. In contrast to the processing of peptide product ion spectra, which nowadays is highly automated, there is a lack of tools assisting the interpretation of oligonucleotide data. The existing web-based and stand-alone software applications are primarily designed for the sequence analysis of natural nucleic acids, but do not account for chemical modifications and adducts. Consequently, we developed software to support the interpretation of mass spectrometric data of natural and modified nucleic acids and their adducts with chemotherapeutic agents.
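To illustrate the combinatorial growth described above, the following simplified sketch counts backbone fragments for a given sequence length; it ignores base losses, charge states, and modification-specific channels, so it does not reproduce the 18-mer figure quoted above.

```python
# Simplified illustration of how the number of theoretical backbone fragments
# grows with sequence length (ignores base losses, charge states, and
# modification-specific channels).
def count_backbone_fragments(length, bond_types=4):
    """Count terminal and internal fragments from phosphodiester backbone cleavage."""
    cleavage_sites = length - 1
    terminal = 2 * bond_types * cleavage_sites                 # 5'- and 3'-containing series
    internal = bond_types**2 * cleavage_sites * (cleavage_sites - 1) // 2  # two cuts
    return terminal + internal

for n in (6, 12, 18):
    print(n, count_backbone_fragments(n))
```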
Abstract:
XMapTools is a MATLAB©-based graphical user interface program for electron microprobe X-ray image processing, which can be used to estimate the pressure–temperature conditions of crystallization of minerals in metamorphic rocks. This program (available online at http://www.xmaptools.com) provides a method to standardize raw electron microprobe data and includes functions to calculate the oxide weight percent compositions for various minerals. A set of external functions is provided to calculate structural formulae from the standardized analyses as well as to estimate pressure–temperature conditions of crystallization, using empirical and semi-empirical thermobarometers from the literature. Two graphical user interface modules, Chem2D and Triplot3D, are used to plot mineral compositions in binary and ternary diagrams. As an example, the software is used to study a high-pressure Himalayan eclogite sample from the Stak massif in Pakistan. The high-pressure paragenesis consisting of omphacite and garnet has been retrogressed to a symplectitic assemblage of amphibole, plagioclase and clinopyroxene. Mineral compositions corresponding to ~165,000 analyses yield estimates for the eclogitic pressure–temperature retrograde path from 25 kbar to 9 kbar. The corresponding pressure–temperature maps were plotted and used to interpret the link between the equilibrium conditions of crystallization and the symplectitic microstructures. This example illustrates the usefulness of XMapTools for studying variations of the chemical composition of minerals and for retrieving information on metamorphic conditions at the microscale, towards the computation of continuous pressure–temperature–relative time paths in zoned metamorphic minerals not affected by post-crystallization diffusion.
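The general idea behind standardizing raw maps against spot analyses can be sketched as follows (in Python for illustration, although XMapTools itself is MATLAB-based); the scaling-by-a-calibration-pixel scheme and all values are assumptions, not the program's actual procedure.

```python
# Hedged illustration of map standardization against a quantified spot
# analysis: per element, raw X-ray counts are scaled so the map reproduces the
# known oxide wt% at the calibration pixel (assumed scheme and values).
import numpy as np

def standardize_map(raw_counts, spot_row, spot_col, spot_oxide_wt):
    """Scale a raw count map so the calibration pixel matches the spot analysis."""
    factor = spot_oxide_wt / raw_counts[spot_row, spot_col]
    return raw_counts * factor

# Synthetic SiO2 count map calibrated against one microprobe spot analysis.
rng = np.random.default_rng(1)
sio2_counts = rng.poisson(500, size=(100, 100)).astype(float)
sio2_wt = standardize_map(sio2_counts, 50, 50, spot_oxide_wt=38.5)
```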
Abstract:
Service providers make use of cost-effective wireless solutions to identify, localize, and possibly track users through their carried mobile devices (MDs) in order to support added services such as geo-advertisement, security, and management. Indoor and outdoor hotspot areas play a significant role for such services; however, GPS does not work in many of these areas. To solve this problem, service providers leverage available indoor radio technologies, such as WiFi, GSM, and LTE, to identify and localize users. We focus our research on passive services provided by third parties, which are responsible for (i) data acquisition and (ii) processing, and on network-based services, where (i) and (ii) are done inside the serving network. For a better understanding of the parameters that affect indoor localization, we investigate several factors influencing indoor signal propagation for both Bluetooth and WiFi technologies. For GSM-based passive services, we first developed a data acquisition module: a GSM receiver that can overhear GSM uplink messages transmitted by MDs while remaining invisible. A set of optimizations was made to the receiver components to support wideband capturing of the GSM spectrum while operating in real time. Processing the wide GSM spectrum is made possible by a proposed distributed processing approach over an IP network. Then, to overcome the lack of information about tracked devices’ radio settings, we developed two novel localization algorithms that rely on proximity-based solutions to estimate device locations in real environments. Given the challenging effects of the indoor environment on radio signals, such as NLOS reception and multipath propagation, we developed an original algorithm to detect and remove contaminated radio signals before they are fed to the localization algorithm. To improve the localization algorithm, we extended our work with a hybrid approach that uses both WiFi and GSM interfaces to localize users. For network-based services, we used a software implementation of an LTE base station to develop our algorithms, which characterize the indoor environment before applying the localization algorithm. Experiments were conducted without any special hardware, any prior knowledge of the indoor layout, or any offline calibration of the system.
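As a point of reference for the proximity-based solutions mentioned above, the following sketch shows a common weighted-centroid baseline that estimates a device position from receiver locations and received signal strengths; it is not one of the thesis' algorithms, and all values are illustrative.

```python
# A common proximity-style baseline (not the thesis' algorithms): estimate a
# device position as the centroid of receiver locations weighted by received
# signal strength converted from dBm to linear power.
import numpy as np

def weighted_centroid(receiver_xy, rssi_dbm):
    """Estimate (x, y) from receiver coordinates and per-receiver RSSI in dBm."""
    weights = 10 ** (np.asarray(rssi_dbm, dtype=float) / 10)   # dBm -> mW
    weights /= weights.sum()
    return weights @ np.asarray(receiver_xy, dtype=float)

# Three fixed receivers overhearing the same uplink burst (illustrative values).
receivers = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
print(weighted_centroid(receivers, [-55.0, -70.0, -72.0]))
```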
Abstract:
Venous air embolism (VAE) is a frequent forensic finding in cases of injury to the head and neck. Whenever found, it has to be appraised in relation to the cause of death. While visualization and quantification are difficult at traditional autopsy, Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) offer new potential in the diagnosis of VAE. This paper reports the findings of VAE in four cases of massive head injury examined postmortem by Multislice Computed Tomography (MSCT) prior to autopsy. MSCT data of the thorax were processed using 3D air structure reconstruction software to visualize air embolism within the vascular system. Quantification of VAE was done by multiplying the air-containing areas on axial 2D images by their reconstruction intervals and then summing the resulting air volumes. Excellent 3D visualization of the air within the vascular system was obtained in all cases, and the intravascular gas volume was quantified.
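The quantification rule stated above can be written out directly as in the following sketch; the per-slice areas and the reconstruction interval are placeholders, not case data.

```python
# Direct sketch of the stated quantification rule: intravascular gas volume =
# sum over axial slices of (air-containing area x reconstruction interval).
# Areas and interval below are placeholders, not data from the four cases.
air_areas_mm2 = [12.0, 18.5, 22.0, 9.5]    # segmented air area per axial slice
reconstruction_interval_mm = 1.25           # slice spacing of the MSCT series

gas_volume_ml = sum(a * reconstruction_interval_mm for a in air_areas_mm2) / 1000
print(f"{gas_volume_ml:.2f} ml")
```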