14 results for Particle Level Set

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance: 80.00%

Abstract:

3D video-fluoroscopy is an accurate but cumbersome technique for estimating natural or prosthetic human joint kinematics. This dissertation proposes innovative methodologies to improve the reliability and usability of 3D fluoroscopic analysis. Because it is based on direct radiographic imaging of the joint, and avoids the soft tissue artefact that limits the accuracy of skin-marker-based techniques, fluoroscopic analysis has a potential accuracy of the order of mm/deg or better. It can provide fundamental information for clinical and methodological applications; however, notwithstanding the number of methodological protocols proposed in the literature, time-consuming user interaction is still required to obtain consistent results. This user-dependency has prevented a reliable quantification of the actual accuracy and precision of the methods and, consequently, has slowed down their translation to clinical practice. The objective of the present work was to speed up this process by introducing methodological improvements in the analysis. In the thesis, fluoroscopic analysis was characterized in depth, in order to evaluate its pros and cons and to provide reliable solutions to overcome its limitations. To this aim, an analytical approach was followed. In-silico preliminary studies isolated the major sources of error as: (a) geometric distortion and calibration errors, (b) 2D image and 3D model resolutions, (c) incorrect contour extraction, (d) bone model symmetries, (e) optimization algorithm limitations, (f) user errors. The effect of each criticality was quantified and verified with an in-vivo preliminary study on the elbow joint. The dominant source of error was identified as the limited extent of the convergence domain of the local optimization algorithms, which forced the user to manually specify the starting pose for the estimation process.
To solve this problem, two different approaches were followed. To enlarge the convergence basin around the optimal pose, the local approach used either sequential alignment of the 6 degrees of freedom in order of sensitivity, or a geometrical feature-based estimation of the initial conditions for the optimization; the global approach used an unsupervised memetic algorithm to optimally explore the search domain. The performance of the technique was evaluated with a series of in-silico studies and validated in-vitro through a phantom-based comparison against a radiostereometric gold standard. The accuracy of the method is joint-dependent; for the intact knee joint, the new unsupervised algorithm guaranteed a maximum error lower than 0.5 mm for in-plane translations, 10 mm for the out-of-plane translation, and 3 deg for rotations in a mono-planar setup, and lower than 0.5 mm for translations and 1 deg for rotations in a bi-planar setup. The bi-planar setup is best suited when accurate results are needed, such as in methodological research studies; the mono-planar analysis may suffice for clinical applications in which analysis time and cost are an issue. A further reduction of the user interaction was obtained for prosthetic joint kinematics: a mixed region-growing and level-set segmentation method was proposed that halved the analysis time by delegating the computational burden to the machine. In-silico and in-vivo studies demonstrated that the reliability of the new semiautomatic method was comparable to a user-defined manual gold standard. The improved fluoroscopic analysis was finally applied to a first in-vivo methodological study of foot kinematics. Preliminary evaluations showed that the presented methodology represents a feasible gold standard for the validation of skin-marker-based foot kinematics protocols.
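The sequential-alignment idea of the local approach can be sketched as a coordinate descent over the six pose parameters, most sensitive first. The quadratic cost, the sensitivity weights and the "true" pose below are invented stand-ins for the image-similarity metric actually optimized in the thesis:

```python
import numpy as np

# Toy stand-in for the image-similarity cost: a quadratic bowl around an
# arbitrary "true" pose. Pose vector order is (tx, ty, tz, rx, ry, rz);
# the sensitivities are invented, with the out-of-plane translation tz
# deliberately the least sensitive, as in mono-planar fluoroscopy.
TRUE_POSE = np.array([1.2, -0.8, 40.0, 2.0, -1.5, 0.5])
SENSITIVITY = np.array([10.0, 10.0, 0.1, 3.0, 3.0, 3.0])

def cost(pose):
    return float(np.sum(SENSITIVITY * (pose - TRUE_POSE) ** 2))

def line_search(pose, axis, span=50.0, steps=2001):
    """Exhaustive 1-D search along a single degree of freedom."""
    candidates = np.linspace(pose[axis] - span, pose[axis] + span, steps)
    scores = []
    for c in candidates:
        trial = pose.copy()
        trial[axis] = c
        scores.append(cost(trial))
    new_pose = pose.copy()
    new_pose[axis] = candidates[int(np.argmin(scores))]
    return new_pose

def sequential_align(start, sweeps=3):
    """Align the most sensitive degrees of freedom first, then refine."""
    order = np.argsort(-SENSITIVITY)
    pose = np.asarray(start, dtype=float)
    for _ in range(sweeps):
        for axis in order:
            pose = line_search(pose, axis)
    return pose

estimate = sequential_align(np.zeros(6))
```

Sweeping the axes by decreasing sensitivity lets the well-constrained in-plane parameters lock in first, so the poorly constrained out-of-plane search starts from a nearly correct pose.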

Relevance: 80.00%

Abstract:

Myocardial perfusion quantification by means of Contrast-Enhanced Cardiac Magnetic Resonance images relies on time-consuming, frame-by-frame manual tracing of regions of interest. In this thesis, a novel automated technique for myocardial segmentation and non-rigid registration as a basis for perfusion quantification is presented. The proposed technique is based on three steps: reference frame selection, myocardial segmentation, and non-rigid registration. In the first step, the reference frame in which both endo- and epicardial segmentation will be performed is chosen. Endocardial segmentation is achieved by means of a statistical region-based level-set technique followed by a curvature-based regularization motion. Epicardial segmentation is achieved by means of an edge-based level-set technique, again followed by a regularization motion. To take into account the changes in position, size and shape of the myocardium throughout the sequence, due to out-of-plane respiratory motion, a non-rigid registration algorithm is required. The proposed non-rigid registration scheme consists of a novel multiscale extension of the normalized cross-correlation algorithm in combination with level-set methods. The myocardium is then divided into standard segments. Contrast enhancement curves are computed by measuring the mean pixel intensity of each segment over time, and perfusion indices are extracted from each curve. The overall approach has been tested on synthetic and real datasets. For validation purposes, the sequences were manually traced by an experienced interpreter, and contrast enhancement curves as well as perfusion indices were computed. Comparisons between automatically extracted and manually obtained contours and enhancement curves showed high inter-technique agreement.
Comparisons of the perfusion indices computed with both approaches against quantitative coronary angiography and visual interpretation demonstrated that the two techniques have similar diagnostic accuracy. In conclusion, the proposed technique allows fast, automated and accurate measurement of intra-myocardial contrast dynamics, and may thus address the strong clinical need for quantitative evaluation of myocardial perfusion.
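The core of the registration step, normalized cross-correlation matching, can be illustrated with a minimal NumPy sketch; the multiscale extension and the level-set coupling described above are omitted, and the frame, template and search window are synthetic:

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross-correlation between two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

def best_shift(region, template):
    """Exhaustively scan `region` for the offset maximizing NCC with `template`."""
    h, w = template.shape
    H, W = region.shape
    best, best_score = (0, 0), -2.0
    for dy in range(H - h + 1):
        for dx in range(W - w + 1):
            score = ncc(region[dy:dy + h, dx:dx + w], template)
            if score > best_score:
                best, best_score = (dy, dx), score
    return best, best_score

rng = np.random.default_rng(0)
frame = rng.random((40, 40))
template = frame[12:28, 9:25]   # 16x16 patch taken at (row=12, col=9)
region = frame[5:31, 4:30]      # search window whose origin is (5, 4)
offset, score = best_shift(region, template)   # expect offset (7, 5)
```

NCC is insensitive to additive and multiplicative intensity changes, which is why it suits contrast-enhanced sequences where brightness varies frame to frame.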

Relevance: 80.00%

Abstract:

Ultrasound imaging is widely used in medical diagnostics as it is the fastest, least invasive, and least expensive imaging modality. However, ultrasound images are intrinsically difficult to interpret. In this scenario, Computer Aided Detection (CAD) systems can be used to support physicians during diagnosis by providing them with a second opinion. This thesis discusses efficient ultrasound processing techniques for computer-aided medical diagnostics, focusing on two major topics: (i) Ultrasound Tissue Characterization (UTC), aimed at characterizing and differentiating between healthy and diseased tissue; (ii) Ultrasound Image Segmentation (UIS), aimed at detecting the boundaries of anatomical structures to automatically measure organ dimensions and compute clinically relevant functional indices. Research on UTC produced a CAD tool for prostate cancer detection that improves the biopsy protocol. In particular, this thesis contributes: (i) the development of a robust classification system; (ii) the exploitation of parallel computing on GPU for real-time performance; (iii) the introduction of both an innovative Semi-Supervised Learning algorithm and a novel supervised/semi-supervised learning scheme for CAD system training, which improve system performance while reducing the data-collection effort and avoiding waste of collected data. The tool provides physicians with a risk map highlighting suspect tissue areas, allowing them to perform a lesion-directed biopsy. Clinical validation demonstrated the system's validity as a diagnostic support tool and its effectiveness at reducing the number of biopsy cores required for an accurate diagnosis. For UIS, the research developed a heart disease diagnostic tool based on Real-Time 3D Echocardiography. The thesis contributions to this application are: (i) the development of an automated GPU-based level-set segmentation framework for 3D images; (ii) the application of this framework to myocardium segmentation.
Experimental results showed the high efficiency and flexibility of the proposed framework. Its effectiveness as a tool for quantitative analysis of 3D cardiac morphology and function was demonstrated through clinical validation.
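A statistical region-based level-set update of the kind underlying such segmentation frameworks can be sketched as a toy 2D Chan-Vese-style step on a synthetic disc; the curvature regularization, the 3D extension and the GPU implementation of the thesis are omitted, and the step size is illustrative:

```python
import numpy as np

def region_levelset_step(phi, image, dt=0.2):
    """One explicit update of a region-based (Chan-Vese style) level set.

    Pixels resembling the mean intensity inside the contour (phi > 0) are
    pushed up, the others down. The curvature term is omitted for brevity.
    """
    inside = phi > 0
    c_in = image[inside].mean()
    c_out = image[~inside].mean()
    force = (image - c_out) ** 2 - (image - c_in) ** 2
    return phi + dt * force

# Synthetic test: a bright disc of radius 15 on a dark background,
# initialised with a small seed contour of radius 5 at the centre.
y, x = np.mgrid[:64, :64]
r = np.sqrt((y - 32.0) ** 2 + (x - 32.0) ** 2)
image = (r < 15).astype(float)
phi = 5.0 - r
for _ in range(200):
    phi = region_levelset_step(phi, image)
segmented = phi > 0
```

Because each pixel is updated independently, this formulation maps naturally onto GPU threads, which is the design choice the thesis exploits.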

Relevance: 80.00%

Abstract:

This PhD thesis presents two measurements of the differential production cross section of top anti-top quark pairs (tt̄) decaying in the lepton+jets final state. The normalized cross section is measured as a function of the top transverse momentum and of the tt̄ mass, transverse momentum and rapidity, using the full 2011 proton-proton (pp) ATLAS dataset taken at a centre-of-mass energy of √s = 7 TeV and corresponding to an integrated luminosity of L = 4.6 fb⁻¹. The cross section is also measured at the particle level as a function of the hadronic top transverse momentum for highly energetic events, using the full 2012 dataset taken at √s = 8 TeV with L = 20 fb⁻¹. The measured spectra are fully corrected for detector efficiency and resolution effects and are compared to several theoretical predictions, showing reasonably good agreement that varies from spectrum to spectrum.
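Normalizing a differential cross section follows directly from its definition: divide efficiency-corrected, bin-width-divided yields by their integral. The bin edges, counts and efficiencies below are invented for illustration, and the detector unfolding applied in the thesis is omitted:

```python
import numpy as np

# Hypothetical pT bin edges (GeV), observed counts and per-bin efficiencies;
# a real measurement would also unfold detector resolution effects.
edges  = np.array([0.0, 50.0, 100.0, 200.0, 400.0])
counts = np.array([8000.0, 12000.0, 6000.0, 900.0])
eff    = np.array([0.55, 0.60, 0.62, 0.58])
lumi   = 4.6  # fb^-1, as in the 7 TeV measurement

widths = np.diff(edges)
dsigma = counts / (eff * lumi * widths)   # dsigma/dpT, fb/GeV
sigma_tot = np.sum(dsigma * widths)       # visible cross section
normalized = dsigma / sigma_tot           # (1/sigma) dsigma/dpT, 1/GeV
print(np.sum(normalized * widths))        # unit normalization check
```

Normalizing cancels the luminosity (and any flat efficiency) uncertainty, which is why shape measurements like these are often quoted as (1/σ) dσ/dX.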

Relevance: 40.00%

Abstract:

Several activities were conducted during my PhD. For the NEMO experiment, a collaboration between the INFN/University groups of Catania and Bologna led to the development and production of a mixed-signal acquisition board for the NEMO Km3 telescope. The research concerned the feasibility study for an acquisition technique quite different from that adopted in the NEMO Phase 1 telescope. The DAQ board we realized exploits the LIRA06 front-end chip for the analog acquisition of the anodic and dynodic sources of a PMT (Photo-Multiplier Tube). The low-power analog acquisition allows multiple channels of the PMT to be sampled simultaneously at different gain factors, in order to increase the linearity of the signal response over a wider dynamic range. The auto-triggering and self-event-classification features also help to improve the acquisition performance and the knowledge of the neutrino event. A fully functional interface towards the first-level data concentrator, the Floor Control Module, has been integrated on the board as well, and specific firmware has been developed to comply with the present communication protocols. This stage of the project foresees the use of an FPGA, a high-speed configurable device, to provide the board with a flexible digital logic control core. After validation of the whole front-end architecture, this feature would probably be integrated in a common mixed-signal ASIC (Application Specific Integrated Circuit). The volatile nature of the FPGA configuration memory implied the integration of a flash ISP (In-System Programming) memory and a smart architecture for its safe remote reconfiguration. All the integrated features of the board have been tested. At the Catania laboratory, the behavior of the LIRA chip was investigated in the digital environment of the DAQ board, and we succeeded in driving the acquisition with the FPGA.
The PMT pulses generated with an arbitrary waveform generator were correctly triggered and acquired by the analog chip, and subsequently digitized by the on-board ADC under the supervision of the FPGA. For the communication towards the data concentrator, a test bench was set up in Bologna where, thanks to a loan from Roma University and INFN, a full readout chain equivalent to that of NEMO Phase 1 was installed. These tests showed good behavior of the digital electronics, which was able to receive and execute commands issued from the PC console and to answer back with a reply. The remotely configurable logic also behaved well and demonstrated, at least in principle, the validity of this technique. A new prototype board is now under development at the Catania laboratory as an evolution of the one described above. This board is going to be deployed within the NEMO Phase 2 tower, in one of its floors dedicated to new front-end proposals. It will integrate a new analog acquisition chip called SAS (Smart Auto-triggering Sampler), thus introducing a new analog front end while inheriting most of the digital logic present in the current DAQ board discussed in this thesis. As for the activity on high-resolution vertex detectors, I worked within the SLIM5 collaboration on the characterization of a MAPS (Monolithic Active Pixel Sensor) device called APSEL-4D. The chip is a matrix of 4096 active pixel sensors with deep N-well implantations, meant to collect charge and to shield the analog electronics from digital noise. It integrates the full-custom sensor matrix and the sparsification/readout logic, realized with standard cells in 130 nm STM CMOS technology. For the chip characterization, a test beam was set up on the 12 GeV PS (Proton Synchrotron) line facility at CERN in Geneva.
The collaboration prepared a silicon strip telescope and a DAQ system (hardware and software) for data acquisition and control of the telescope, which allowed about 90 million events to be stored in 7 equivalent days of beam live-time. My activities mainly concerned the realization of a firmware interface to and from the MAPS chip, in order to integrate it into the general DAQ system. Thereafter I worked on the DAQ software to implement a proper Slow Control interface for the APSEL-4D. Several APSEL-4D chips with different thinnings were tested during the test beam. Those thinned to 100 and 300 um presented an overall efficiency of about 90% with a threshold of 450 electrons. The test beam also allowed the resolution of the pixel sensor to be estimated, providing good results consistent with the pitch/sqrt(12) formula. The MAPS intrinsic resolution was extracted from the width of the residual distribution, taking into account the multiple scattering effect.
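The pitch/sqrt(12) figure quoted above is the RMS error of a binary (hit/no-hit) readout under a uniform hit distribution across the cell; a one-line check, using a hypothetical 50 um pitch rather than the APSEL-4D value:

```python
import math

def binary_resolution(pitch_um: float) -> float:
    """Expected position resolution of a digital (binary) pixel readout.

    For hits uniformly distributed across a cell of the given pitch, the
    RMS of the reconstruction error is pitch / sqrt(12), i.e. the standard
    deviation of a uniform distribution over one pitch.
    """
    return pitch_um / math.sqrt(12.0)

print(round(binary_resolution(50.0), 1))  # → 14.4 (um)
```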

Relevance: 30.00%

Abstract:

Agent Communication Languages (ACLs) have been developed to provide a way for agents to communicate with each other, supporting cooperation in Multi-Agent Systems (MASs). In the past few years many ACLs have been proposed for Multi-Agent Systems, such as KQML and FIPA-ACL. The goal of these languages is to support high-level, human-like communication among agents, exploiting Knowledge Level features rather than symbol-level ones. Adopting these ACLs, and mainly the FIPA-ACL specifications, many agent platforms and prototypes have been developed. Despite these efforts, an important issue in the research on ACLs is still open: how these languages should deal, at the Knowledge Level, with possible failures of agents. Indeed, the notion of Knowledge Level cannot be straightforwardly extended to a distributed framework such as MASs, because problems concerning communication and concurrency may arise when several Knowledge Level agents interact (for example, deadlock or starvation). The main contribution of this thesis is the design and implementation of NOWHERE, a platform to support Knowledge Level agents on the Web. NOWHERE exploits an advanced Agent Communication Language, FT-ACL, which provides high-level fault-tolerant communication primitives and satisfies a set of well-defined Knowledge Level programming requirements. NOWHERE is well integrated with current technologies, for example providing full integration with Web services. Since it supports different message-transport middleware, it can be adapted to various scenarios. In this thesis we present the design and implementation of the architecture, together with a discussion of the most interesting details and a comparison with other emerging agent platforms. We also present several case studies in which we discuss the benefits of programming agents using the NOWHERE architecture, comparing the results with other solutions. Finally, the complete source code of the basic examples can be found in the appendix.

Relevance: 30.00%

Abstract:

During the last decade, advances in sensor design and improved base materials have pushed the radiation hardness of current silicon detector technology to impressive performance, which should allow operation of the tracking systems of the Large Hadron Collider (LHC) experiments at nominal luminosity (10^34 cm^-2 s^-1) for about 10 years. Beyond that, however, the current silicon detectors would be unable to cope with such an environment. Silicon carbide (SiC), which has recently been recognized as potentially radiation hard, is now being studied. In this work, the effect of high-energy neutron irradiation on 4H-SiC particle detectors was analyzed. Schottky and junction particle detectors were irradiated with 1 MeV neutrons up to a fluence of 10^16 cm^-2. It is well known that the degradation of the detectors with irradiation, independently of the structure used for their realization, is caused by lattice defects, such as the creation of point-like defects, dopant deactivation and dead-layer formation, and that a crucial aspect for understanding the defect kinetics at a microscopic level is the correct identification of the crystal defects in terms of their electrical activity. In order to clarify the defect kinetics, a thermal transient spectroscopy (DLTS and PICTS) analysis of different samples irradiated at increasing fluences was carried out. The defect evolution was correlated with the transport properties of the irradiated detectors, always in comparison with the un-irradiated one. The charge collection efficiency degradation of the Schottky detectors induced by neutron irradiation was related to the increasing concentration of defects as a function of the neutron fluence.

Relevance: 30.00%

Abstract:

The aim of this thesis was to study the effects of extremely low frequency (ELF) electromagnetic fields on potassium currents in a neural cell line (neuroblastoma SK-N-BE), using the whole-cell patch-clamp technique. This sophisticated technique can investigate electrophysiological activity at the single-cell, and even single-channel, level. The total potassium ion current through the cell membrane was measured while exposing the cells to a combination of static (DC) and alternating (AC) magnetic fields, according to the predictions of the so-called 'Ion Resonance Hypothesis'. For this purpose we designed and fabricated a magnetic field exposure system that reaches a good compromise between magnetic field homogeneity and accessibility to the biological sample under the microscope. The exposure system consists of three large orthogonal pairs of square coils surrounding the patch-clamp set-up and connected to the signal generation unit, able to generate different combinations of static and/or alternating magnetic fields. The system was characterized in terms of field distribution and uniformity through computation and direct field measurements. No statistically significant changes in the potassium ion currents through the cell membrane were revealed when the cells were exposed to AC/DC magnetic field combinations according to the aforementioned 'Ion Resonance Hypothesis'.
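Under the Ion Resonance Hypothesis, the AC frequency is matched to the cyclotron frequency f = qB/(2πm) of a biologically relevant ion in the static field. A small sketch with an illustrative 50 uT static field (not necessarily the value used in the experiments) shows why such resonances fall in the ELF range:

```python
import math

def cyclotron_frequency_hz(charge_c: float, mass_kg: float, b_static_t: float) -> float:
    """Ion cyclotron frequency f = qB / (2 pi m) for a given static field."""
    return charge_c * b_static_t / (2.0 * math.pi * mass_kg)

E = 1.602176634e-19    # elementary charge, C
U = 1.66053906660e-27  # atomic mass unit, kg

# K+ (A ~ 39) in an illustrative 50 uT static field:
f_k = cyclotron_frequency_hz(E, 39.0 * U, 50e-6)
print(f_k)  # a few tens of Hz, i.e. in the ELF band
```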

Relevance: 30.00%

Abstract:

This thesis work has been developed in the framework of a new experimental campaign proposed by the NUCL-EX Collaboration (INFN III Group) in order to progress in the understanding of the statistical properties of light nuclei, at excitation energies above the particle emission threshold, by measuring exclusive data from fusion-evaporation reactions. The determination of the nuclear level density in the A~20 region, the understanding of the statistical behavior of light nuclei with excitation energies of ~3 A.MeV, and the measurement of observables linked to the presence of cluster structures in nuclear excited levels are the main physics goals of this work. On the theory side, the contribution of this work to the project lies in the development of a dedicated Monte-Carlo Hauser-Feshbach code for the evaporation of the compound nucleus. The experimental part of this thesis consisted of participation in the 12C+12C measurement at 95 MeV beam energy, at Laboratori Nazionali di Legnaro (INFN), using the GARFIELD + Ring Counter (RCo) set-up, from the beam-time request through data taking, data reduction, detector calibration and data analysis. Different results of the data analysis are presented in this thesis, together with a theoretical study of the system performed with the new statistical decay code. As a result of this work, constraints on the nuclear level density at high excitation energy are given for light systems ranging from C up to Mg. Moreover, pre-equilibrium effects, tentatively interpreted as alpha-clustering effects, are highlighted both in the entrance channel of the reaction and in the dissipative dynamics on the path towards thermalisation.
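The stochastic core of a Monte-Carlo Hauser-Feshbach evaporation step is the sampling of a decay channel in proportion to its decay width. The widths below are invented for illustration; the real code derives them from transmission coefficients and the level density:

```python
import random

# Illustrative relative decay widths for one evaporation step of a
# compound nucleus (invented numbers, not computed from physics).
CHANNELS = {"n": 0.15, "p": 0.30, "alpha": 0.50, "gamma": 0.05}

def sample_channel(rng: random.Random) -> str:
    """Draw one decay channel with probability proportional to its width."""
    r = rng.random() * sum(CHANNELS.values())
    acc = 0.0
    for name, width in CHANNELS.items():
        acc += width
        if r <= acc:
            return name
    return name  # numerical safety net for r == total

rng = random.Random(12345)
draws = [sample_channel(rng) for _ in range(100_000)]
freq_alpha = draws.count("alpha") / len(draws)
```

Repeating such draws until the excitation energy falls below all particle thresholds produces one simulated evaporation chain per event.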

Relevance: 30.00%

Abstract:

In the first chapter, we consider the joint estimation of objective and risk-neutral parameters for SV option pricing models. We propose a strategy that exploits the information contained in large heterogeneous panels of options, and we apply it to S&P 500 index and index call option data. Our approach breaks the stochastic singularity between contemporaneous option prices by assuming that every observation is affected by measurement error. We evaluate the likelihood function using a MC-IS strategy combined with a Particle Filter algorithm. The second chapter examines the impact of different categories of traders on market transactions. We estimate a model that takes traders' identities into account at the transaction level, and we find that stock prices follow the direction of institutional trading. These results are obtained with data from an anonymous market. To explain our estimates, we examine the informativeness of a wide set of market variables, and we find that most of them are unambiguously significant for inferring the identity of traders. The third chapter investigates the relationship between the categories of market traders and three definitions of financial durations. We consider trade, price and volume durations, and we adopt a Log-ACD model in which we include information on traders at the transaction level. For trade durations, we observe an increase in trading frequency when informed traders and the liquidity provider intensify their presence in the market. For price and volume durations, we find that the same effect depends on the state of market activity. The fourth chapter proposes a strategy to express order aggressiveness in quantitative terms. We consider a simultaneous equation model to examine price and volume aggressiveness at Euronext Paris, and we analyse the impact of a wide set of order book variables on the price-quantity decision.
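The particle-filter likelihood evaluation mentioned in the first chapter can be illustrated on a toy state-space model: a bootstrap filter with multinomial resampling, far simpler than the MC-IS scheme applied to the SV pricing model, and with all parameters invented:

```python
import numpy as np

def bootstrap_pf_loglik(y, n_particles=2000, rho=0.95, sigma_x=0.2,
                        sigma_y=0.5, seed=0):
    """Log-likelihood of a toy AR(1)-plus-noise model via a bootstrap
    particle filter; illustrative parameters, not the thesis model."""
    rng = np.random.default_rng(seed)
    # start from the stationary distribution of the latent AR(1) state
    x = rng.normal(0.0, sigma_x / np.sqrt(1 - rho ** 2), n_particles)
    loglik = 0.0
    for obs in y:
        x = rho * x + rng.normal(0.0, sigma_x, n_particles)  # propagate
        w = np.exp(-0.5 * ((obs - x) / sigma_y) ** 2) / (sigma_y * np.sqrt(2 * np.pi))
        loglik += np.log(w.mean())     # MC estimate of p(y_t | y_{1:t-1})
        x = rng.choice(x, size=n_particles, p=w / w.sum())   # resample
    return float(loglik)

# simulate data from the same toy model, then evaluate its likelihood
rng = np.random.default_rng(1)
xs = [0.0]
for _ in range(50):
    xs.append(0.95 * xs[-1] + rng.normal(0.0, 0.2))
y = np.array(xs[1:]) + rng.normal(0.0, 0.5, 50)
ll = bootstrap_pf_loglik(y)
```

The measurement-error assumption in the chapter plays the same role as sigma_y here: it gives every observation a density, so the filter's per-step predictive densities multiply into a well-defined likelihood.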

Relevance: 30.00%

Abstract:

According to much evidence, observing objects activates two types of information: structural properties, i.e., the visual information about the structural features of objects, and function knowledge, i.e., the conceptual information about their skilful use. Many studies so far have focused on the role played by these two kinds of information during object recognition and on their neural underpinnings. However, to the best of our knowledge, no study so far has focused on the differential activation of this information (structural vs. function) during object manipulation and conceptualization, depending on the age of participants and on the level of object familiarity (familiar vs. non-familiar). Therefore, the main aim of this dissertation was to investigate how actions and concepts related to familiar and non-familiar objects may vary across development. To pursue this aim, four studies were carried out. A first study led to the creation of the Familiar and Non-Familiar Stimuli Database, a set of everyday objects classified by Italian pre-schoolers, schoolers, and adults, useful for verifying how object knowledge is modulated by age and frequency of use. A parallel study demonstrated that factors such as sociocultural dynamics may affect the perception of objects. Specifically, data on the familiarity, naming, function, use and frequency of use of the objects in the Familiar and Non-Familiar Stimuli Database were collected from Dutch and Croatian children and adults. The last two studies on object interaction and language provide further evidence in support of the literature on affordances and on the link between affordances and the cognitive process of language from a developmental point of view, supporting the perspective of situated cognition and emphasizing the crucial role of human experience.

Relevance: 30.00%

Abstract:

Diffuse radio emission in galaxy clusters has been observed with different sizes and properties. Giant radio halos (RH), Mpc-size sources found in merging clusters, and mini halos (MH), 0.1-0.5 Mpc size sources located in relaxed cool-core clusters, are thought to be distinct classes of objects with different formation mechanisms. However, recent observations have revealed the unexpected presence of diffuse emission on Mpc scales in relaxed clusters that host a central MH and show no signs of major mergers. The study of these sources is still in its early stages, and it is not yet clear what the origin of their unusual emission could be. The main goal of this thesis is to test the occurrence of these peculiar sources and investigate their properties using low-frequency radio observations. This thesis consists of the study of a sample of 12 cool-core galaxy clusters that present some level of dynamical disturbance on large scales. The heterogeneity of the sources in the sample allowed me to investigate under which conditions a halo-type emission is present in MH clusters, and also to study the connection between AGN bubbles and the local environment. Using high-sensitivity LOFAR observations, I have detected large-scale emission in four non-merging clusters, in addition to the central MH. I have constrained for the first time the spectral properties of diffuse emission in these galaxy clusters with a double radio component, and I have investigated the connection between their thermal and non-thermal emission for a better understanding of the acceleration mechanism. Furthermore, I derived upper limits on the halo power for the other clusters in the sample, which could present large-scale diffuse emission below the detection threshold. Finally, I have reconstructed the duty cycle of one of the most powerful AGN known, located at the centre of a galaxy cluster in the sample.

Relevance: 30.00%

Abstract:

The Standard Model (SM) of particle physics predicts the existence of a Higgs field responsible for the generation of particle masses. However, some aspects of this theory remain unsolved, suggesting the presence of new physics Beyond the Standard Model (BSM), with the production of new particles at an energy scale above the current experimental limits. Additional Higgs bosons are, in fact, predicted by theoretical extensions of the SM, including the Minimal Supersymmetric Standard Model (MSSM). In the MSSM, the Higgs sector consists of two Higgs doublets, resulting in five physical Higgs particles: two charged bosons $H^{\pm}$, two neutral scalars $h$ and $H$, and one pseudoscalar $A$. The work presented in this thesis is dedicated to the search for neutral non-Standard-Model Higgs bosons decaying to two muons in the model-independent MSSM scenario. Proton-proton collision data recorded by the CMS experiment at the CERN LHC at a center-of-mass energy of 13 TeV are used, corresponding to an integrated luminosity of $35.9\ \text{fb}^{-1}$. The search is sensitive to neutral Higgs bosons produced either via gluon fusion or in association with a $\text{b}\bar{\text{b}}$ quark pair. The extensive use of machine and deep learning techniques is a fundamental element in the discrimination between simulated signal and background events. A new network structure called a parameterised Neural Network (pNN) has been implemented, replacing a whole set of single neural networks, each trained at a specific mass hypothesis, with a single neural network able to generalise well and interpolate over the entire mass range considered. The results of the pNN signal/background discrimination are used to set a model-independent 95\% confidence level expected upper limit on the production cross section times branching ratio for a generic $\phi$ boson decaying into a muon pair in the 130 to 1000 GeV mass range.
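The defining trick of a pNN, feeding the mass hypothesis as an extra input so one network covers all mass points, can be sketched at the dataset level; the toy feature distributions below are invented and stand in for the real simulated events:

```python
import numpy as np

rng = np.random.default_rng(0)

# One hypothetical discriminating feature (e.g. the dimuon mass) for
# signal generated at several mass hypotheses, and for background.
mass_points = np.array([130.0, 300.0, 600.0, 1000.0])
sig_x = np.concatenate([rng.normal(m, 0.05 * m, 500) for m in mass_points])
sig_m = np.repeat(mass_points, 500)        # signal: its true mass hypothesis
bkg_x = rng.exponential(200.0, 2000)
bkg_m = rng.choice(mass_points, 2000)      # background: random hypothesis

# A pNN consumes (features, mass parameter) pairs: a single network then
# interpolates between trained mass points instead of one net per point.
X = np.column_stack([np.concatenate([sig_x, bkg_x]),
                     np.concatenate([sig_m, bkg_m])])
y = np.concatenate([np.ones(2000), np.zeros(2000)])
```

Pairing background events with randomly drawn mass hypotheses keeps the mass input uninformative about the label on its own, so the network learns the feature-mass correlation rather than the hypothesis value itself.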

Relevance: 30.00%

Abstract:

In high-energy hadron collisions, the production at parton level of heavy-flavour quarks (charm and bottom) is described by perturbative Quantum Chromodynamics (pQCD) calculations, given the hard scale set by the quark masses. However, in hadron-hadron collisions, predicting the heavy-flavour hadrons eventually produced requires knowledge of the parton distribution functions as well as an accurate description of the hadronisation process. The latter is taken into account via the fragmentation functions measured at e$^+$e$^-$ colliders or in ep collisions, but several observations in LHC Run 1 and Run 2 data have challenged this picture. In this dissertation, I studied charm hadronisation in proton-proton collisions at $\sqrt{s}$ = 13 TeV with the ALICE experiment at the LHC, making use of a high-statistics data sample collected during LHC Run 2. The production of heavy flavour in this collision system will be discussed, along with various hadronisation models implemented in commonly used event generators, which try to reproduce the experimental data while accounting for the unexpected LHC results on the enhanced production of charmed baryons. The role of multiple parton interactions (MPI) will also be presented, and how it affects the total charm production as a function of multiplicity. The ALICE apparatus will be described before moving to the experimental results, which concern the measurement of the relative production rates of the charm hadrons $\Sigma_c^{0,++}$ and $\Lambda_c^+$; these allow us to study the hadronisation mechanisms of charm quarks and to constrain different hadronisation models. Furthermore, the analysis of D mesons ($D^{0}$, $D^{+}$ and $D^{*+}$) as a function of charged-particle multiplicity and spherocity will be shown, investigating the role of multi-parton interactions.
This research is relevant in its own right and for the mission of the ALICE experiment at the LHC, which is devoted to the study of the Quark-Gluon Plasma.