908 results for Pre-processing
Abstract:
The presented study analyses rural landscape changes. In particular, it focuses on understanding the driving forces acting on the rural built environment, using a statistical spatial model implemented through GIS techniques. It is well known that the study of landscape changes is essential for conscious decision making in land planning. A bibliographic review revealed a general lack of studies dealing with the modelling of the rural built environment; hence a theoretical modelling approach for this purpose is needed. Advances in technology and modernity in building construction and agriculture have gradually changed the rural built environment. In addition, the phenomenon of urbanization has determined the construction of new volumes beside abandoned or derelict rural buildings. Consequently, two main types of transformation dynamics affecting the rural built environment can be observed: the conversion of rural buildings and the increase in the number of buildings. The specific aim of the presented study is to propose a methodology for the development of a spatial model that allows the identification of the driving forces that acted on building allocation. In fact, one of the most concerning dynamics nowadays is the irrational expansion of building sprawl across the landscape. The proposed methodology is composed of conceptual steps covering the different aspects involved in developing a spatial model: the selection of a response variable that best describes the phenomenon under study, the identification of possible driving forces, the sampling methodology for data collection, the choice of the most suitable algorithm in relation to the statistical theory and method used, and the calibration and evaluation of the model. Different combinations of factors in various parts of the territory generated conditions that were more or less favourable for building allocation, and the existence of buildings represents the evidence of such an optimum. Conversely, the absence of buildings expresses a combination of agents which is not suitable for building allocation. Presence or absence of buildings can therefore be adopted as indicators of such driving conditions, since they represent the expression of the action of driving forces in the land-suitability sorting process. The existence of a correlation between site selection and hypothetical driving forces, evaluated by means of modelling techniques, provides evidence of which driving forces are involved in the allocation dynamic and an insight into their level of influence on the process. GIS spatial analysis tools make it possible to associate the concept of presence and absence with point features, generating a point process. Presence or absence of buildings at given site locations represents the expression of the interaction of these driving factors. Presence points represent the locations of real, existing buildings, whereas absence points represent locations where buildings do not exist and are therefore generated by a stochastic mechanism. Possible driving forces are selected and the existence of a causal relationship with building allocation is assessed through a spatial model. The adoption of empirical statistical models provides a mechanism for explanatory variable analysis and for the identification of the key driving variables behind the site-selection process for new building allocation.
The model developed by following this methodology is applied to a case study in order to test its validity. The study area is the New District of Imola, characterized by a prevailing agricultural production vocation and by intense transformation dynamics. The development of the model involved the identification of predictive variables (related to the geomorphologic, socio-economic, structural and infrastructural systems of the landscape) capable of representing the driving forces responsible for landscape changes. The model is calibrated on spatial data covering the periurban and rural parts of the study area within the 1975-2005 time period by means of a generalized linear model. The resulting output of the model fit is a continuous grid surface whose cells assume values ranging from 0 to 1, expressing the probability of building occurrence across the rural and periurban area. Hence the response variable assesses the changes in the rural built environment that occurred in this time interval and is correlated to the selected explanatory variables by means of a generalized linear model using logistic regression. By comparing the probability map obtained from the model with the actual rural building distribution in 2005, the interpretation capability of the model can be evaluated. The proposed model can also be applied to the interpretation of trends occurring in other study areas, and with reference to different time intervals, depending on the availability of data. The use of suitable data in terms of time, information and spatial resolution, and the costs related to data acquisition, pre-processing and survey, are among the most critical aspects of model implementation. Future in-depth studies can focus on using the proposed model to predict short- to medium-range future scenarios for the distribution of the rural built environment in the study area. In order to predict future scenarios, it is necessary to assume that the driving forces do not change and that their levels of influence within the model are not far from those assessed for the calibration time interval.
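As an illustration of the calibration step described above, the following is a minimal sketch of a binomial GLM (logistic regression) relating building presence/absence to candidate driving forces; the variable names and the synthetic data are hypothetical placeholders, not the thesis's actual dataset.

```python
# Minimal sketch, not the thesis code: a binomial GLM (logistic
# regression) linking building presence/absence to candidate driving
# forces. Variable names and data are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
data = pd.DataFrame({
    "slope":     rng.uniform(0, 30, n),    # geomorphologic factor (deg)
    "road_dist": rng.uniform(0, 5000, n),  # infrastructural factor (m)
    "pop_dens":  rng.uniform(0, 300, n),   # socio-economic factor
})
# Synthetic presence (existing buildings) / absence (stochastic) points
logit = -0.05 * data["slope"] - 0.001 * data["road_dist"] + 0.01 * data["pop_dens"]
data["presence"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(data[["slope", "road_dist", "pop_dens"]])
fit = sm.GLM(data["presence"], X, family=sm.families.Binomial()).fit()
print(fit.summary())    # coefficient signs/sizes hint at driving forces
p = fit.predict(X)      # probabilities in [0, 1], as in the grid map
```

Evaluating the fitted probabilities on every grid cell of the territory would yield the continuous 0-1 probability surface described in the abstract.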
Abstract:
Nuclear Magnetic Resonance (NMR) is a branch of spectroscopy based on the fact that many atomic nuclei may be oriented by a strong magnetic field and will absorb radiofrequency radiation at characteristic frequencies. The parameters that can be measured on the resulting spectral lines (line positions, intensities, line widths, multiplicities and transients in time-dependent experiments) can be interpreted in terms of molecular structure, conformation, molecular motion and other rate processes. In this way, high-resolution (HR) NMR allows qualitative and quantitative analysis of samples in solution, in order to determine the structure of molecules in solution, and not only that. In the past, high-field NMR spectroscopy was mainly concerned with the elucidation of chemical structure in solution, but today it is emerging as a powerful exploratory tool for probing biochemical and physical processes, and it represents a versatile tool for the analysis of foods. Many NMR studies have been reported in the literature on different types of food, such as wine, olive oil, coffee, fruit juices, milk, meat, egg, starch granules, flour, etc., using different NMR techniques. Traditionally, univariate analytical methods have been used to explore spectroscopic data. Such a method is used to measure or select a single descriptive variable from the whole spectrum, and in the end only this variable is analysed. Applied to HR-NMR data, this univariate approach leads to several problems, due especially to the complexity of an NMR spectrum. The latter is composed of different signals belonging to different molecules, but the same molecule can also be represented by several, generally strongly correlated, signals. Univariate methods, in this case, take into account only one or a few variables, causing a loss of information. Thus, when dealing with complex samples like foodstuffs, univariate analysis of spectral data is not powerful enough. Spectra need to be considered in their wholeness, and their analysis must take the whole data matrix into consideration: chemometric methods are designed to treat such multivariate data. Multivariate data analysis is used for a number of distinct purposes, and its aims can be divided into three main groups: • data description (explorative data-structure modelling of any generic n-dimensional data matrix, e.g. PCA); • regression and prediction (PLS); • classification and prediction of class belonging for new samples (LDA, PLS-DA and ECVA). The aim of this PhD thesis was to verify the possibility of identifying and classifying plants or foodstuffs into different classes, based on the concerted variation in metabolite levels detected by NMR spectra, using multivariate data analysis as a tool to interpret the NMR information. It is important to underline that the results obtained are useful for pointing out the metabolic consequences of a specific modification of foodstuffs, avoiding the use of a targeted analysis for the different metabolites. The data analysis is performed by applying chemometric multivariate techniques to the dataset of acquired NMR spectra. The research work presented in this thesis is the result of a three-year PhD study.
This thesis reports the main results obtained from two main activities: A1) evaluation of a data pre-processing system in order to minimize unwanted sources of variation due to different instrumental set-ups, manual spectra processing and sample-preparation artefacts; A2) application of multivariate chemometric models in data analysis.
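As an illustration of the exploratory "data description" aim (PCA) mentioned above, the following is a minimal sketch on a synthetic matrix of NMR spectra (rows = samples, columns = chemical-shift variables); the data and dimensions are placeholders.

```python
# Minimal sketch of exploratory PCA on NMR spectra; synthetic data only.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
spectra = rng.normal(size=(40, 2048))  # 40 samples x 2048 spectral points

X = StandardScaler().fit_transform(spectra)  # column-wise autoscaling
pca = PCA(n_components=5)
scores = pca.fit_transform(X)                # sample coordinates (scores)
loadings = pca.components_                   # spectral-variable weights
print(pca.explained_variance_ratio_)         # variance captured per PC
```

Plotting the scores reveals groupings of samples, while the loadings point back to the spectral regions, and hence metabolites, driving the separation.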
Abstract:
This thesis introduces new processing techniques for computer-aided interpretation of ultrasound images with the purpose of supporting medical diagnostics. In terms of practical application, the goal of this work is the improvement of current prostate biopsy protocols by providing physicians with a visual map, overlaid on ultrasound images, marking regions potentially affected by disease. As far as analysis techniques are concerned, the main contribution of this work to the state of the art is the introduction of deconvolution as a pre-processing step in the standard ultrasonic tissue characterization procedure, to improve the diagnostic significance of ultrasonic features. This thesis also includes some innovations in ultrasound modeling, in particular the employment of a continuous-time autoregressive moving-average (CARMA) model for ultrasound signals, a new maximum-likelihood CARMA estimator based on exponential splines, and the definition of CARMA parameters as new ultrasonic features able to capture scatterer concentration. Finally, concerning the clinical usefulness of the developed techniques, the main contribution of this research is showing, through a study based on medical ground truth, that a reduction in the number of sampled cores in standard prostate biopsy is possible while preserving the diagnostic power of the current clinical protocol.
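The thesis's own deconvolution and CARMA estimators are more elaborate than can be shown here; as a hedged illustration of deconvolution as a pre-processing step, the following sketch applies a generic 1-D Wiener deconvolution to a synthetic RF line under an assumed pulse. It stands in for, and is not, the method developed in the thesis.

```python
# Illustrative sketch only: generic 1-D Wiener deconvolution of an
# ultrasound RF line given an assumed pulse (point-spread function).
import numpy as np

def wiener_deconvolve(rf, pulse, snr=100.0):
    """Frequency-domain Wiener deconvolution of an RF line."""
    n = len(rf)
    H = np.fft.rfft(pulse, n)                  # pulse spectrum
    G = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.fft.irfft(np.fft.rfft(rf) * G, n)

rng = np.random.default_rng(0)
# Assumed 5 MHz pulse sampled at 40 MHz, windowed to 64 samples
pulse = np.sin(2 * np.pi * 5e6 * np.arange(64) / 40e6) * np.hanning(64)
reflectivity = rng.normal(size=1024) * (rng.random(1024) < 0.05)
rf = np.convolve(reflectivity, pulse, mode="same") + 0.01 * rng.normal(size=1024)
tissue_response = wiener_deconvolve(rf, pulse)  # sharper scatterer map
```

The deconvolved line is closer to the underlying tissue reflectivity, which is what makes subsequently extracted ultrasonic features more diagnostic.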
Abstract:
Head trauma is among the most important traumatic pathologies. Every year in Italy, 250 patients per 100,000 inhabitants are hospitalized for head trauma, and mortality is about 17 cases per 100,000 inhabitants per year. Italy is fully in line with the European average, considering the mean European incidence of 232 cases per 100,000 inhabitants and a mortality of 15 cases per 100,000 inhabitants. Studies have indicated that anticoagulant therapy is one of the main risk factors for the evolution of a haemorrhagic lesion. Unlike anticoagulant therapy, the haemorrhagic risk associated with antiplatelet therapy is still being verified. The problem is particularly relevant in Western populations, as the use of antiplatelet agents is becoming progressively more widespread, owing to the prevention policies promoted by national and international guidelines on cardiovascular risk, especially in the older age groups. For the first time, it was demonstrated at the Forlì hospital [1], on a sufficiently large case series, that chronic antiplatelet therapy for cardiovascular risk prevention can be a significant risk factor for haemorrhagic complications in a subject with head trauma, even of mild degree. To deepen and validate the results of that research, in 2009 the hospital conducted a new survey involving, in addition to the Forlì hospital, thirty-one other Italian hospital centres. This research work, together with the researchers of the Forlì hospital, aims to verify, using the data collected by the hospital centres in 2009, whether antiplatelet therapy influences the evolution, in a worsening sense, of a haemorrhagic lesion following mild, moderate or severe head trauma in an adult subject. The document is structured in two parts. The first, more theoretical part establishes the key concepts concerning the research context and the methodology used to analyse the data, while the second, more practical part illustrates the work done to answer the research question. The first part consists of two chapters: • Chapter 1 describes what head trauma is, what an anticoagulant drug is and what an antiplatelet drug is; • Chapter 2 describes what Data Mining is and which techniques were used to analyse the data. The second part consists of four chapters: • Chapter 3 describes the structure of the data collected by the thirty-two hospital centres and the data pre-processing and transformation phase, together with the tools used to analyse the data; • Chapter 4 describes how the exploratory data analysis was performed; • Chapter 5 describes the analyses carried out on the data and, above all, the results that these analyses, thanks to Data Mining techniques, produced in answer to the research question; • Chapter 6 presents the conclusions of the research. For a better understanding of the work, two appendices have been added: the first deals with the data mining software Weka, used to perform the analyses, while the second deals with the implementation of the methods for building decision trees.
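The analyses themselves were carried out in Weka; as a hedged illustration of the kind of decision-tree analysis described, here is a minimal Python sketch in which the feature names are hypothetical stand-ins for the clinical variables collected by the centres, and the data are synthetic.

```python
# Hedged sketch: the thesis used Weka; this analogous decision-tree
# analysis uses synthetic data and hypothetical clinical features.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 300
X = np.column_stack([
    rng.integers(18, 95, n),       # age (years)
    rng.binomial(1, 0.3, n),       # antiplatelet therapy (0/1)
    rng.binomial(1, 0.1, n),       # anticoagulant therapy (0/1)
])
y = rng.binomial(1, 0.2 + 0.2 * X[:, 1])  # lesion progression (synthetic)

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(export_text(tree, feature_names=["age", "antiplatelet", "anticoag"]))
```

The printed tree shows which variables drive the splits; in the real study, a split on the antiplatelet-therapy flag would be evidence bearing on the research question.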
Abstract:
The subject of the present work is the analysis of fMRI (functional Magnetic Resonance Imaging) data within an EEG-fMRI study on patients affected by idiopathic Parkinson's disease. EEG-fMRI combines two different techniques for the in vivo study of brain activity: electroencephalography (EEG) and functional magnetic resonance imaging. The former records the electrical activity of cortical neurons with excellent temporal resolution; the latter indirectly measures neuronal activity by recording its correlated metabolic effects, with good spatial resolution. Simultaneous acquisition and combination of the two types of data make it possible to exploit the advantages of each technique. The aim of the study is to investigate functional brain connectivity at rest in patients with early-stage idiopathic Parkinson's disease. In particular, the interest is focused on the changes in connectivity with primary and supplementary motor areas following the administration of dopaminergic therapy. The four main phases of the data analysis are physiological-noise correction, the usual pre-processing of fMRI data, seed-based connectivity analysis, and the combination of the data of all patients in a group-level statistical analysis. Using the electrocardiogram measured together with the EEG and an estimate of respiratory activity, physiological-noise correction was performed, with results consistent with the literature. The fMRI connectivity analysis showed a significant increase in connectivity after the administration of the therapy: in particular, the brain areas most connected to the motor areas were found to be those involved in the sensorimotor network, the attentional network and the default mode network. These results suggest that dopaminergic therapy, besides having a positive effect on motor performance during movement execution, also begins to act at rest, improving attentional and executive functions, which are integral components of the preparatory phase of movement. In the near future these results will be combined with those obtained from the analysis of the EEG data.
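As an illustration of the seed-based connectivity step, the following minimal sketch correlates the mean time series of a hypothetical motor seed with every voxel; the shapes and data are placeholders, since real pipelines operate on pre-processed 4-D fMRI volumes.

```python
# Minimal sketch of seed-based connectivity: correlate a seed region's
# mean BOLD time series with every voxel. Synthetic placeholder data.
import numpy as np

rng = np.random.default_rng(0)
bold = rng.normal(size=(200, 10000))  # 200 time points x 10000 voxels
seed_mask = np.zeros(10000, dtype=bool)
seed_mask[:50] = True                 # hypothetical primary motor ROI

seed_ts = bold[:, seed_mask].mean(axis=1)
# Pearson correlation of the seed with each voxel time series
z = (bold - bold.mean(0)) / bold.std(0)
r_map = z.T @ ((seed_ts - seed_ts.mean()) / seed_ts.std()) / len(seed_ts)
fisher_z = np.arctanh(r_map)          # Fisher transform for group stats
```

The Fisher-transformed maps of all patients are then entered into the group-level statistical analysis, e.g. to contrast pre- and post-therapy sessions.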
Abstract:
The determination of skeletal loading conditions in vivo and their relationship to the health of bone tissue remain open questions. Computational modeling of the musculoskeletal system is the only practicable method providing a valuable approach to muscle and joint loading analyses, although crucial shortcomings limit the translation of computational methods into orthopedic and neurological practice. Growing attention has focused on subject-specific modeling, particularly when pathological musculoskeletal conditions need to be studied. Nevertheless, subject-specific data cannot always be collected in research and clinical practice, and there is a lack of efficient methods and frameworks for building models and incorporating them into simulations of motion. The overall aim of the present PhD thesis was to introduce improvements to state-of-the-art musculoskeletal modeling for the prediction of physiological muscle and joint loads during motion. A threefold goal was articulated as follows: (i) develop state-of-the-art subject-specific models and analyze skeletal load predictions; (ii) analyze the sensitivity of model predictions to relevant musculotendon model parameters and kinematic uncertainties; (iii) design an efficient software framework simplifying the effort-intensive pre-processing phases of subject-specific modeling. The first goal underlined the relevance of subject-specific musculoskeletal modeling for determining physiological skeletal loads during gait, corroborating the choice of fully subject-specific modeling for the analysis of pathological conditions. The second goal characterized the sensitivity of skeletal load predictions to the major musculotendon parameters and to kinematic uncertainties, applying robust probabilistic methods for methodological and clinical purposes. The last goal produced an efficient software framework for subject-specific modeling and simulation that is practical, user friendly and effort-effective. Future research aims at the implementation of more accurate models describing lower-limb joint mechanics and musculotendon paths, and at the assessment, through probabilistic modeling, of an overall scenario of the crucial model parameters affecting skeletal load predictions.
An Integrated Transmission-Media Noise Calibration Software For Deep-Space Radio Science Experiments
Abstract:
The thesis describes the implementation of calibration, format-translation and data-conditioning software for radiometric tracking data of deep-space spacecraft. All of the propagation-media noise rejection techniques available as features in the code are covered in their mathematical formulation, performance and software implementation. Some techniques are retrieved from the literature and the current state of the art, while other algorithms have been conceived ex novo. All three typical deep-space refractive environments (solar plasma, ionosphere, troposphere) are dealt with by employing specific subroutines. Specific attention has been reserved for the GNSS-based tropospheric path delay calibration subroutine, since it is the bulkiest module of the software suite in terms of both the sheer number of lines of code and development time. The software is currently in its final stage of development and, once completed, will serve as a pre-processing stage for orbit determination codes. Calibration of transmission-media noise sources in radiometric observables proved to be an essential operation to be performed on radiometric data in order to meet the increasingly demanding error budget requirements of modern deep-space missions. A completely autonomous and all-around propagation-media calibration software package is a novelty in orbit determination, although standalone codes are currently employed by ESA and NASA. The described S/W is planned to be compatible with the current standards for tropospheric noise calibration used by both these agencies, such as the AMC, TSAC and ESA IFMS weather data, and it natively works with the Tracking Data Message (TDM) file format adopted by CCSDS as a standard aimed at promoting and simplifying inter-agency collaboration.
Abstract:
Within the study area of Schleswig-Holstein, 39,712 topographic depressions were detected using ESRI ArcMap 9.3 and 10.0. Data preparation was followed by further calculations in MATLAB R2010b. Each object was spatially intersected with its individual properties, including area, perimeter, coordinates (centroids), depth and maximum depth of the depression, and shape factors such as roundness, convexity and elongation. The aim of the presented methods was to answer three questions: Are negative landforms suitable for distinguishing and determining landscape units and ice advances? Is there a coupling between depressions in the recent topography and deep geological structures? Can sinks of different origins be subdivided on the basis of their shape characteristics? The classification of the major landscape units is based on the assumption that young-moraine areas, their outwash plains and old-moraine areas can each be delimited by characteristic closed depressions, such as dead-ice kettles, lakes, etc. Normally such depressions are rather rare in nature, but they are considered typical of former glacial landscapes. The aim was to differentiate the main geological units, ice advances and moraine areas of the last glaciations. A detection grid based on square cells was used for the analysis. The results show that by using depressions alone to classify landscape units, overall accuracies of up to 71.4% can be achieved; that is, roughly three out of four detection cells can be assigned correctly. Young moraines, old moraines, periglacial outwash plains and Holocene areas can be distinguished from one another and correctly assigned with high confidence using the depressions. This shows that certain sink shapes are indeed typical of the respective units. The sinks detected in the first step were spatially intersected with further geological information in order to investigate to what extent natural depressions are of purely glacial origin or whether their expression is also related to deep geological structures. 25,349 (63.88%) of all sinks are smaller than 10,000 m², lie in young-moraine areas and can presumably be attributed to glacial and periglacial influences. 2,424 depressions lie within the areas of subglacial channels. 1,529 detected depressions lie within subsidence areas, 1,033 of which are located within the marshlands in the west. 919 large structures over 1 km in size along the North Sea coast correlate particularly well with, among other things, compaction zones of Elsterian channels; 344 of these depressions are also associated with tunnel valleys in the subsurface. This parallelism between depressions and the tunnel valleys, which are in places more than 100 m deep, can be attributed to sediment compaction. A connection with the decomposition of postglacial organic material is also conceivable. In addition, negative landforms were detected within a distance of 10 km around the Miocene-active flanks of the Glückstadt Graben that show connections to near-surface fault structures. This is an indication of graben activity during and towards the end of the glaciation and during the Holocene. Many of these fault-related sinks are also associated with tunnel valleys.
Accordingly, three interacting processes are identified that can be linked to the formation of the depressions. One possible interpretation is that the eastern flank of the Glückstadt Graben reacted to the load of the Elsterian ice sheet, while subglacial drainage channels formed along the zones of weakness at the same time. During the warm periods these were largely filled with peat and unconsolidated sediments. The glacier advances of the late Weichselian reactivated the flanks, and the loose material was additionally exarated, creating large lakes such as the Großer Plöner See. In total, 29 large depressions of 5 km or more in size were identified in Schleswig-Holstein that are at least partly connected with basin subsidence and activity of the graben flanks, or even originate from them. The last sub-study dealt with the differentiation of sinks according to their potential genesis and with the distinction between natural and artificial depressions. For this purpose a DEM covering a total area of 252 km² in northern Lower Saxony was used. The results show that glacially formed depressions have good roundness values, and their elongation and eccentricity also indicate rather compact shapes. Linear negative structures are often rivers or oxbows and can be identified as Holocene structures. In contrast to the potentially natural sink shapes, artificially created depressions tend to be angular or irregular and usually do not tend towards compact shapes. Three main classes of topographic depressions could be identified and delimited from one another: potentially glacial sinks (dead-ice forms), rivers, side channels and oxbows, and artificial sinks. Classifying sinks by shape parameters is a useful instrument for distinguishing the different types and for excluding artificial sinks prior to processing in geological investigations. However, it turned out that the results depend essentially on the resolution of the underlying elevation model.
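As an illustration of the shape factors used in the classification, here is a minimal sketch computed with shapely (the study itself used ArcMap and MATLAB); the formulas are the common definitions and are assumptions, not taken from the thesis.

```python
# Sketch of depression shape factors with shapely; common definitions
# (roundness = 4*pi*A/P^2, etc.) assumed, not the study's exact formulas.
import math
from shapely.geometry import Polygon

def shape_factors(poly: Polygon) -> dict:
    area, perim = poly.area, poly.length
    hull = poly.convex_hull
    # Side lengths of the minimum rotated bounding rectangle
    coords = list(poly.minimum_rotated_rectangle.exterior.coords)
    sides = sorted(math.dist(coords[i], coords[i + 1]) for i in range(2))
    return {
        "roundness":  4 * math.pi * area / perim ** 2,  # 1.0 = circle
        "convexity":  hull.length / perim,              # <= 1, 1 = convex
        "elongation": sides[0] / sides[1],              # 1 = square box
    }

# Elongated 4 x 1 rectangle: low roundness, fully convex, elongation 0.25
print(shape_factors(Polygon([(0, 0), (4, 0), (4, 1), (0, 1)])))
```

Compact, round, convex shapes would score like glacial kettles, while angular or strongly elongated outlines point towards artificial or fluvial origins.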
Abstract:
In Western industrialized countries, breast cancer is the most common malignant tumour in women, accounting for about 21% of all cancers in women worldwide. By now, one in nine women is at risk of developing breast cancer during her lifetime. The age-standardized mortality rate is currently just under 27%. Breast cancer has a relatively low growth rate. A diagnostic procedure capable of detecting and removing all breast carcinomas under 10 mm in diameter would practically eliminate death from breast cancer, since the 20-year survival rate for initial carcinomas of 5 to 10 mm in size is very high, at over 95%. Contrast-enhanced MRI is a relatively young examination method that is sensitive enough to detect carcinomas from a size of 3 mm in diameter. The diagnostic methodology, however, is complex and error-prone, and requires a long training period and thus a great deal of experience on the part of the radiologist. Computer-aided diagnosis software can increase the quality of such a complex diagnosis, or at least speed up the process. The goal of this work is the development of fully automatic diagnosis software that can be used as a second-opinion system. To my knowledge, no such complete software exists to date. The software executes a chain of different image-processing steps modelled on the radiologist's approach, producing an independent diagnosis for each detected lesion. First, as a pre-processing step, a 3D image registration eliminates motion artefacts in order to improve the image quality for the subsequent processing steps. Every contrast-enhancing object is then detected by a rule-based segmentation with adaptive thresholds. By computing kinetic and morphological features, the contrast-uptake behaviour and the shape, margin and texture properties of each object are described. Finally, based on the resulting feature vector, two trained neural networks classify each object as an additional finding or as a benign or malignant lesion. The performance of the software was tested on image data from 101 female patients containing 141 histologically verified lesions. The prediction of the benign or malignant status of these lesions yielded a sensitivity of 88% at a specificity of 72%. These values are similar to the predictions of expert radiologists reported in the literature. The predictions contained on average 2.5 additional malignant findings per patient, which turned out to be falsely classified artefacts.
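The detection step described above rests on thresholding contrast enhancement. The following is a hedged sketch with synthetic volumes, in which a fixed 50% enhancement threshold stands in for the adaptive, rule-based thresholds of the actual software.

```python
# Hedged sketch of contrast-enhancement detection; the real CAD uses
# adaptive, rule-based thresholds. Data and threshold are illustrative.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
pre = rng.uniform(100, 200, size=(64, 64, 16))   # pre-contrast volume
post = pre * (1 + 0.05 * rng.random(pre.shape))  # mild global uptake
post[20:28, 30:38, 6:10] *= 1.8                  # synthetic enhancing lesion

rel_enh = (post - pre) / pre                     # relative enhancement
mask = rel_enh > 0.5                             # fixed stand-in threshold
labels, n = ndimage.label(mask)                  # one label per object
print(f"{n} contrast-enhancing object(s) detected")
```

Each labelled object would then be passed on to the feature-extraction and neural-network classification stages of the pipeline.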
Abstract:
In the present thesis work I developed a method for analysing brain DW-MRI (Diffusion-Weighted Magnetic Resonance Imaging) data by means of a tractography algorithm, for the reconstruction of the corticospinal tract in a sample of 25 healthy volunteers. Diffusion tensor imaging (DTI) exploits the ability of the diffusion tensor D to measure the diffusion process of water, in order to quantitatively estimate tissue anisotropy. In particular, in cerebral white matter the diffusion of water molecules is directed preferentially along the fibres, while it is hindered perpendicular to them. Tractography uses the information obtained through DW imaging to provide a measure of structural connectivity between different brain regions. The work focuses on the corticospinal tract, which is involved in voluntary motor function, transmitting impulses from the motor cortex to the motor neurons of the spinal cord. The work was organized in three phases. In the first, I developed the pre-processing of DW images acquired with diffusion gradients along both 25 and 64 directions in each of the 25 healthy volunteers. An original and innovative method was developed, based on Regions of Interest (ROIs) obtained through automated segmentation of the grey matter and on ROIs defined manually on a template common to all the subjects under examination. To reconstruct the tract, a probabilistic tractography algorithm was used, which estimates the most probable fibre direction and, with a high number of gradient directions, can identify, if present, more than one dominant direction (second fibre). In the second part of the work, each tract was subdivided into 100 segments (percentiles). Fractional anisotropy (FA), mean diffusivity, connectivity probability, tract volume and second-fibre volume were estimated with an "along-tract" quantitative analysis, in order to obtain an accurate comparison of the corresponding percentiles of the tracts across subjects. In the third part of the study, the data obtained with 25 and 64 gradient directions were compared, as was the tract between the two sides. The statistical analysis of the inter-subject and intra-subject data revealed high variability between subjects, demonstrating the importance of parameterizing the tract. The results obtained confirm that the developed tractographic analysis method for the corticospinal tract is reliable and reproducible. Moreover, it emerged that an acquisition with 25 DTI directions, better tolerated by the patient because of the shorter scan time, yields reliable results. The main clinical application concerns neurodegenerative diseases with motor symptoms, whether acquired, such as parkinsonian syndromes, or of genetic origin, as well as the evaluation of intracranial masses, for defining the degree of contiguity of the tract. Finally, the groundwork was laid for the standardization of the quantitative analysis of other tracts of interest in the clinical setting or in physiopathogenetic research studies.
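As an illustration of the DTI quantities estimated above, the following sketch computes fractional anisotropy (FA) and mean diffusivity (MD) from the eigenvalues of the diffusion tensor D, using the standard formulas; the tensor values are a made-up voxel.

```python
# Standard FA and MD from the eigenvalues of a (hypothetical) 3x3
# symmetric diffusion tensor D: FA = sqrt(3/2)*||lam - mean||/||lam||.
import numpy as np

D = np.array([[1.7, 0.1, 0.0],
              [0.1, 0.4, 0.0],
              [0.0, 0.0, 0.3]]) * 1e-3  # mm^2/s, hypothetical voxel

lam = np.linalg.eigvalsh(D)             # eigenvalues, ascending
md = lam.mean()                         # mean diffusivity
fa = np.sqrt(1.5 * np.sum((lam - md) ** 2) / np.sum(lam ** 2))
print(f"MD = {md:.2e} mm^2/s, FA = {fa:.2f}")  # FA in [0, 1]
```

High FA indicates strongly directed diffusion along coherent fibres, as in the corticospinal tract, while FA near 0 corresponds to isotropic diffusion.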
Abstract:
Numerical simulations of eye globes often rely on topographies that have been measured in vivo using devices such as the Pentacam or OCT. These topographies, which represent the form of the already stressed eye under the existing intraocular pressure, introduce approximations into the analysis. The accuracy of the simulations could be improved if either the stress state of the eye under the effect of intraocular pressure were determined, or the stress-free form of the eye estimated, prior to conducting the analysis. This study reviews earlier attempts to address this problem and assesses the performance of an iterative technique proposed by Pandolfi and Holzapfel [1], which is both simple to implement and promises high accuracy in estimating the eye's stress-free form. A parametric study was conducted and demonstrated the dependence of the error level on the flexibility of the eye model, especially in the cornea region. However, in all cases considered, 3-4 analysis iterations were sufficient to produce a stress-free form with average errors in node location below 10⁻⁶ mm and a maximal error below 10⁻⁴ mm. This error level, which is similar to what has been achieved with other methods and orders of magnitude lower than the accuracy of current clinical topography systems, justifies the use of the technique as a pre-processing step in ocular numerical simulations.
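The iterative technique can be summarized as a fixed-point correction of the unloaded geometry; a minimal sketch follows, assuming this fixed-point form. The inflate() argument stands in for a full finite-element inflation solve, and the toy inflation at the end is purely illustrative.

```python
# Minimal sketch, assuming the fixed-point form of the technique: the
# measured topography is the target deformed shape; the candidate
# stress-free geometry is corrected by the nodal displacement error.
import numpy as np

def recover_stress_free(x_target, inflate, iop, tol=1e-6, max_iter=10):
    """x_target: node coordinates measured under the pressure iop;
    inflate(X, iop): deformed nodes of candidate stress-free geometry X."""
    X = x_target.copy()              # initial guess: the measured form
    for _ in range(max_iter):
        x = inflate(X, iop)          # FE inflation to pressure iop
        err = x - x_target
        if np.abs(err).max() < tol:  # max nodal error (mm)
            break
        X -= err                     # shift nodes back by the error
    return X

# Toy check: a fictitious inflation stretching all nodes radially by 2%
x_meas = np.random.default_rng(0).normal(size=(100, 3))
X0 = recover_stress_free(x_meas, lambda X, iop: 1.02 * X, iop=15.0)
```

Because each pass removes most of the residual displacement, a handful of iterations suffices, consistent with the 3-4 iterations reported above.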
Abstract:
The main purpose of this project is to understand the process of engine simulation using the open-source CFD code KIVA. This report mainly discusses the simulation of a 4-valve pentroof engine with KIVA 3VR2. KIVA is an open-source FORTRAN code used to solve the fluid flow field in engines, handling transient 2D and 3D chemically reactive flows with sprays. The report also covers the complete procedure for simulating an engine cycle, from pre-processing to the final results, and will serve as a handbook for using the KIVA code.
Abstract:
The feasibility of carbon sequestration in cement kiln dust (CKD) was investigated in a series of batch and column experiments conducted under ambient temperature and pressure conditions. The significance of this work is the demonstration that alkaline wastes, such as CKD, are highly reactive with carbon dioxide (CO2). In the presence of water, CKD can sequester greater than 80% of its theoretical capacity for carbon without any amendments or modifications to the waste. Other mineral carbonation technologies for carbon sequestration rely on the use of mined mineral feedstocks as the source of oxides. The mining, pre-processing and reaction conditions needed to create favorable carbonation kinetics all require significant additions of energy to the system; therefore, their actual net reduction in CO2 is uncertain. Many suitable alkaline wastes are produced at sites that also generate significant quantities of CO2. While independently the reduction in CO2 emissions from mineral carbonation in CKD is small (~13% of process-related emissions), when this technology is applied to similar wastes of other industries, the collective net reduction in emissions may be significant. The technical investigations presented in this dissertation progress from proof of feasibility, through examination of the extent of sequestration in core samples taken from an aged CKD waste pile, to more fundamental batch and microscopy studies which analyze the rates and mechanisms controlling mineral carbonation reactions in a variety of fresh CKD types. Finally, the scale of the system was increased to assess the sequestration efficiency under more pilot- or field-scale conditions and to clarify the importance of particle-scale processes under more dynamic (flowing gas) conditions. A comprehensive set of material characterization methods, including thermal analysis, X-ray diffraction, and X-ray fluorescence, was used to confirm extents of carbonation and to better elucidate the compositional factors controlling the reactions. The results of these studies show that the rate of carbonation in CKD is controlled by the extent of carbonation. With increased degrees of conversion, particle-scale processes such as intraparticle diffusion and CaCO3 micropore precipitation patterns begin to limit the rate and possibly the extent of the reactions. Rates may also be influenced by the nature of the oxides participating in the reaction, slowing when the free or unbound oxides are consumed and reaction conditions shift towards the consumption of less reactive Ca species. While microscale processes and composition effects appear to be important at later times, the overall degrees of carbonation observed in the wastes were significant (> 80%), a majority of which occurs within the first 2 days of reaction. Under the operational conditions applied in this study, the degree of carbonation in CKD achieved in column-scale systems was comparable to that observed under ideal batch conditions. In addition, the similarity in sequestration performance among several different CKD waste types indicates that, aside from available oxide content, no compositional factors significantly hinder the ability of the waste to sequester CO2.
Abstract:
In laser sintering, the powder bed is pre-heated by radiant heaters in order to achieve a temperature just below the material's melting point at the powder surface. The temperature distribution on the surface should be as homogeneous as possible, in order to achieve uniform part properties throughout the build chamber and to keep part distortion low. Experience, however, shows very inhomogeneous temperature distributions, which is why the integration of new or optimized process-monitoring systems into the machines is often demanded. One potentially applicable system is thermographic cameras, which allow area-wide acquisition of surface temperatures and thus conclusions about the temperatures at the powder-bed surface. In this way, cold regions on the surface can be identified and taken into account during process preparation. At the same time, thermography enables observation of the temperatures during laser exposure and thus the derivation of relationships between process parameters and melt temperatures. Within the scope of the investigations carried out, an IR camera system was successfully integrated as a permanent installation into a laser-sintering machine, and solutions were developed for the problems arising in the process. Subsequently, investigations were conducted on the temperature distribution on the powder-bed surface and on the factors influencing its homogeneity. In further investigations, the melt temperatures were determined as a function of various process parameters. On the basis of these measurement results, conclusions were drawn about the optimizations required, and the usability of thermography in laser sintering for process monitoring, process control and machine maintenance was assessed as a first intermediate status of the investigations.
Abstract:
When stereo images are captured under less than ideal conditions, there may be inconsistencies between the two images in brightness, contrast, blurring, etc. When stereo matching is performed between the images, these variations can greatly reduce the quality of the resulting depth map. In this paper we propose a method for correcting sharpness variations in stereo image pairs which is performed as a pre-processing step to stereo matching. Our method is based on scaling the 2D discrete cosine transform (DCT) coefficients of both images so that the two images have the same amount of energy in each of a set of frequency bands. Experiments show that applying the proposed correction method can greatly improve the disparity map quality when one image in a stereo pair is more blurred than the other.
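As an illustration of the described correction, the following is a minimal sketch that scales the 2D DCT coefficients of one image so each frequency band carries the same energy as in the reference image; the radial partition into bands is an assumption, and the paper's exact band definition may differ.

```python
# Minimal sketch, assuming a radial partition of the 2-D DCT plane into
# frequency bands; the paper's exact band definition may differ.
import numpy as np
from scipy.fft import dctn, idctn
from scipy.ndimage import gaussian_filter

def match_sharpness(img, ref, n_bands=8):
    """Scale DCT coefficients of img so each band matches ref's energy."""
    A, R = dctn(img, norm="ortho"), dctn(ref, norm="ortho")
    h, w = img.shape
    u, v = np.meshgrid(np.arange(w) / w, np.arange(h) / h)
    band = np.minimum((np.hypot(u, v) / np.sqrt(2) * n_bands).astype(int),
                      n_bands - 1)
    for b in range(n_bands):
        m = band == b
        e_img, e_ref = np.sum(A[m] ** 2), np.sum(R[m] ** 2)
        if e_img > 0:
            A[m] *= np.sqrt(e_ref / e_img)  # equalize band energy
    return idctn(A, norm="ortho")

ref = np.random.default_rng(0).random((128, 128))
blurred = gaussian_filter(ref, sigma=1.5)   # one image more blurred
corrected = match_sharpness(blurred, ref)   # sharpness-matched output
```

Running this as a pre-processing step brings the high-frequency energy of the blurrier image back in line with its partner before stereo matching is performed.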