45 results for Data fusion applications
Abstract:
In this thesis two major topics in medical ultrasound imaging are addressed: deconvolution and segmentation. For the first, a deconvolution algorithm is described that restores statistically consistent maximum a posteriori estimates of the tissue reflectivity. These estimates are shown to provide a reliable source of information for an accurate characterization of biological tissues through the ultrasound echo. The second topic involves the definition of a semi-automatic algorithm for myocardium segmentation in 2D echocardiographic images. The results show that the proposed method can reduce inter- and intra-observer variability in the delineation of myocardial contours and is feasible and accurate even on clinical data.
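For readers unfamiliar with MAP restoration, a minimal sketch follows, assuming a known pulse, white Gaussian noise and a Gaussian reflectivity prior, under which the MAP estimate reduces to a frequency-domain Wiener filter; the thesis's estimator is more elaborate, and all signal parameters below are invented for illustration.

    import numpy as np

    def map_deconvolve(y, h, noise_var, prior_var):
        """MAP estimate of reflectivity r for y = h (*) r + n, Gaussian prior/noise."""
        H, Y = np.fft.fft(h, n=len(y)), np.fft.fft(y)
        R = np.conj(H) * Y / (np.abs(H) ** 2 + noise_var / prior_var)
        return np.real(np.fft.ifft(R))

    rng = np.random.default_rng(0)
    n = 512
    r = np.zeros(n); r[rng.choice(n, 20, replace=False)] = rng.standard_normal(20)

    t = np.arange(-32, 32)
    pulse = np.exp(-t ** 2 / 50.0) * np.cos(0.6 * t)   # toy ultrasound pulse
    h = np.zeros(n); h[:64] = pulse
    h = np.roll(h, -32)                                # center the pulse at lag 0

    y = np.real(np.fft.ifft(np.fft.fft(h) * np.fft.fft(r)))  # circular convolution
    y += 0.05 * rng.standard_normal(n)                        # measurement noise
    r_hat = map_deconvolve(y, h, noise_var=0.05 ** 2, prior_var=1.0)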
Abstract:
This thesis work has been developed in the framework of a new experimental campaign, proposed by the NUCL-EX Collaboration (INFN III Group), aimed at progressing in the understanding of the statistical properties of light nuclei, at excitation energies above the particle emission threshold, by measuring exclusive data from fusion-evaporation reactions. The main physics goals of this work are the determination of the nuclear level density in the A~20 region, the understanding of the statistical behavior of light nuclei at excitation energies of ~3 A.MeV, and the measurement of observables linked to the presence of cluster structures in nuclear excited levels. On the theory side, the contribution of this work to the project lies in the development of a dedicated Monte Carlo Hauser-Feshbach code for the evaporation of the compound nucleus. The experimental part of this thesis consisted of participation in the 12C+12C measurement at 95 MeV beam energy, at Laboratori Nazionali di Legnaro - INFN, using the GARFIELD + Ring Counter (RCo) set-up, from the beam-time request through data taking, data reduction and detector calibrations to data analysis. Different results of the data analysis are presented in this thesis, together with a theoretical study of the system performed with the new statistical decay code. As a result of this work, constraints on the nuclear level density at high excitation energy are given for light systems ranging from C up to Mg. Moreover, pre-equilibrium effects, tentatively interpreted as alpha-clustering effects, are brought into evidence, both in the entrance channel of the reaction and in the dissipative dynamics along the path towards thermalisation.
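The following toy sketch (not the thesis's code) conveys the Monte Carlo Hauser-Feshbach idea: evaporation chains are sampled by drawing decay channels with probabilities proportional to level-density-driven widths. The separation energies, level-density parameter and kinetic-energy sampling are placeholder assumptions.

    import numpy as np

    rng = np.random.default_rng(1)
    SEP = {"n": 12.0, "p": 11.0, "alpha": 7.0}   # toy separation energies (MeV)

    def relative_width(E_star, channel, a=2.5):
        """Toy width ~ daughter level density, rho(U) ~ exp(2*sqrt(a*U))."""
        U = E_star - SEP[channel]
        return 0.0 if U <= 0 else np.exp(2.0 * np.sqrt(a * U))

    def evaporate(E_star):
        """Sample one evaporation chain until all particle channels close."""
        chain = []
        while True:
            widths = {c: relative_width(E_star, c) for c in SEP}
            total = sum(widths.values())
            if total == 0.0:          # below all thresholds: gamma decay ends it
                return chain
            probs = np.array([widths[c] for c in SEP]) / total
            channel = rng.choice(list(SEP), p=probs)
            eps = rng.exponential(1.5)   # toy kinetic energy; real HF codes
                                         # sample the full evaporation spectrum
            E_star = max(E_star - SEP[channel] - eps, 0.0)
            chain.append((channel, eps))

    chains = [evaporate(40.0) for _ in range(1000)]   # 1000 events at E* = 40 MeV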
Abstract:
Wireless Sensor Networks (WSNs) are receiving widespread attention since their low cost has made them easily accessible. One of the key elements of WSNs is distributed sensing. When the precise location of a signal of interest is unknown across the monitored region, distributing many sensors randomly/uniformly may yield a better representation of the monitored random process than a traditional sensor deployment. In a typical WSN application the data sensed by the nodes is sent to one (or more) central device, denoted as sink, which collects the information and can either act as a gateway towards other networks (e.g. the Internet), where the data can be stored, or process the data in order to command actuators to perform special tasks. In such a scenario, a dense sensor deployment may create bottlenecks when many nodes compete to access the channel. Even though there are mitigation methods for channel access, concurrent (parallel) transmissions may occur. In this study, always within the scope of monitoring applications, the development work on two industrial projects with dense sensor deployments (the eDIANA Project funded by the European Commission and the Centrale Adriatica Project funded by Coop Italy) and the measurement results from several different test-beds revealed the need for a mathematical analysis of concurrent transmissions. To the best of our knowledge, the literature contains no mathematical analysis of concurrent transmission in the 2.4 GHz PHY of IEEE 802.15.4. In the thesis, experiences from the eDIANA and Centrale Adriatica Projects are reported, together with a mathematical analysis of concurrent transmissions spanning from O-QPSK chip demodulation to the packet reception rate under several different types of theoretical demodulators. The analysis agrees very well with the measurements reported so far in the literature.
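For orientation, the sketch below links SINR to packet reception rate for the 2.4 GHz O-QPSK PHY using the closed-form BER approximation for its 16-ary orthogonal DSSS signaling that is widely cited in the literature; the thesis develops more refined demodulator models, and the frame length here is only an example.

    import numpy as np
    from scipy.special import comb

    def ber_802154(sinr_linear):
        """Approximate bit error rate vs. SINR (linear scale, not dB)."""
        k = np.arange(2, 17)
        terms = (-1.0) ** k * comb(16, k) * np.exp(20.0 * sinr_linear * (1.0 / k - 1.0))
        return (8.0 / 15.0) * (1.0 / 16.0) * terms.sum()

    def prr(sinr_db, frame_bytes=127):
        """Packet reception rate assuming independent bit errors over the frame."""
        ber = ber_802154(10.0 ** (sinr_db / 10.0))
        return (1.0 - ber) ** (8 * frame_bytes)

    for s in (-2, 0, 2, 4):
        print(f"SINR {s:+d} dB -> PRR {prr(s):.3f}")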
Abstract:
In the last decades mesenchymal stromal cells (MSC), intriguing for their multilineage plasticity and their proliferation activity in vitro, have been intensively studied for innovative therapeutic applications. In the first project, a new method to expand adipose-derived MSC (ASC) in vitro while maintaining their progenitor properties has been investigated. ASC are cultured in the same flask for 28 days in order to allow cell-extracellular matrix and cell-cell interactions and to mimic the in vivo niche. ASC cultured with this method (Unpass cells) were compared with ASC cultured under classic conditions (Pass cells). Unpass and Pass cells were characterized in terms of clonogenicity, proliferation, stemness gene expression, and differentiation in vitro and in vivo, and the results obtained showed that Unpass cells preserve their stemness and phenotypic properties, suggesting a fundamental role of the niche in the maintenance of ASC progenitor features. Our data suggest alternative culture conditions for the expansion of ASC ex vivo which could increase the performance of ASC in regenerative applications. In vivo MSC tracking is essential in order to assess their homing and migration. Super-paramagnetic iron oxide nanoparticles (SPION) have been used to track MSC in vivo thanks to their biocompatibility and traceability by MRI. In the second project a new generation of magnetic nanoparticles (MNP) used to label MSC was tested. These MNP have been functionalized with hyperbranched poly(epsilon-lysine) dendrons (G3CB) in order to interact with the membrane glycocalyx of the cells, avoiding their internalization and preventing any cytotoxic effects. The literature reports that labeling MSC with SPION requires long incubation times; in our experiments, after 15 min of incubation with G3CB-MNP more than 80% of MSC were labeled. The data obtained from cytotoxicity, proliferation and differentiation assays showed that labeling does not affect MSC properties, suggesting a potential application of G3CB nanoparticles in regenerative medicine.
Abstract:
This study focuses on the use of metabonomic applications in measuring fish freshness in various biological species and in evaluating how they are stored. This metabonomic approach is innovative and is based upon molecular profiling through nuclear magnetic resonance (NMR). On the one hand, the aim is to ascertain whether a type of fish has maintained, within certain limits, its sensory and nutritional characteristics after being caught; on the other, the research observes the alterations in the product’s composition. The spectroscopic data obtained through experimental nuclear magnetic resonance, 1H-NMR, of the molecular profiles of the fish extracts are compared with those obtained on the same samples through the analytical and conventional methods now in practice. These latter methods are used to obtain chemical indices of freshness through the biochemical and microbial degradation of nitrogen compounds, protein-based and not (trimethylamine, N-(CH3)3, nucleotides, amino acids, etc.). Subsequently, a principal component analysis (PCA) and a partial least squares discriminant analysis (PLS-DA) are performed within the metabonomic approach to condense the temporal evolution of freshness into a single parameter. In particular, under both storage conditions (4 °C and 0 °C) the first principal component (PC1) captures, with very high explained variance, the direction along which the molecular composition of the samples (as described by the 1H-NMR spectrum) evolves during storage. The results of this study provide scientific evidence supporting objective criteria for evaluating the freshness of fish products and for identifying those which can be labeled “fresh fish.”
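The chemometric step can be sketched as follows (synthetic stand-in data, not the real fish-extract spectra): PCA on binned 1H-NMR spectra, with PC1 expected to track storage time.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(2)
    days = np.repeat(np.arange(0, 15, 2), 3)        # storage days, 3 replicates each
    n_bins = 200                                    # binned spectral regions
    degrading = np.linspace(0, 1, n_bins) < 0.2     # bins of degradation metabolites (e.g. TMA)
    X = rng.normal(1.0, 0.05, (len(days), n_bins))
    X[:, degrading] += 0.1 * days[:, None]          # those signals grow during storage

    scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
    # PC1 condenses the freshness evolution into a single parameter:
    print(abs(np.corrcoef(scores[:, 0], days)[0, 1]))   # close to 1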
Abstract:
This work presents exact algorithms for Resource Allocation and Cyclic Scheduling Problems (RA&CSPs). Cyclic Scheduling Problems arise in a number of application areas, such as hoist scheduling, mass production, compiler design (implementing scheduling loops on parallel architectures), software pipelining, and embedded system design. The RA&CS problem concerns the assignment of times and resources to a set of activities, to be indefinitely repeated, subject to precedence and resource capacity constraints. In this work we present two constraint programming frameworks addressing two different types of cyclic problems. First, we consider the disjunctive RA&CSP, where the allocation problem involves unary resources. Instances are described through the Synchronous Data-flow (SDF) Model of Computation. The key problem of finding a maximum-throughput allocation and scheduling of Synchronous Data-Flow graphs onto a multi-core architecture is NP-hard and has traditionally been solved by means of heuristic (incomplete) algorithms. We propose an exact (complete) algorithm for the computation of a maximum-throughput mapping of applications, specified as SDF graphs, onto multi-core architectures. Results show that the approach can handle realistic instances in terms of size and complexity. Next, we tackle the Cyclic Resource-Constrained Scheduling Problem (CRCSP). We propose a Constraint Programming approach based on modular arithmetic: in particular, we introduce a modular precedence constraint and a global cumulative constraint along with their filtering algorithms. Many traditional approaches to cyclic scheduling operate by fixing the period value and then solving a linear problem in a generate-and-test fashion. Conversely, our technique is based on a non-linear model and tackles the problem as a whole: the period value is inferred from the scheduling decisions. The proposed approaches have been tested on a number of non-trivial synthetic instances and on a set of realistic industrial instances, achieving good results on problems of practical size.
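The modular precedence relation at the heart of a cyclic CP model can be sketched as below; the names and the brute-force period scan are purely illustrative, whereas the thesis infers the period through dedicated filtering algorithms.

    def modular_precedence_ok(s_i, s_j, d, omega, lam):
        """Cyclic arc (i -> j): the occurrence of j, omega iterations later,
        must start at least d time units after i starts: s_j - s_i + lam*omega >= d."""
        return s_j - s_i + lam * omega >= d

    def min_feasible_period(arcs, starts):
        """Smallest integer period satisfying all arcs (toy linear scan)."""
        lam = 1
        while not all(modular_precedence_ok(starts[i], starts[j], d, w, lam)
                      for i, j, d, w in arcs):
            lam += 1
        return lam

    # Tiny instance: three activities repeated forever; the last arc wraps
    # around to the next iteration (omega = 1).
    arcs = [(0, 1, 3, 0), (1, 2, 4, 0), (2, 0, 2, 1)]   # (i, j, duration, omega)
    starts = {0: 0, 1: 3, 2: 7}
    print(min_feasible_period(arcs, starts))             # -> 9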
Abstract:
Future wireless communication systems are expected to be extremely dynamic, smart and capable of interacting with the surrounding radio environment. To implement such advanced devices, cognitive radio (CR) is a promising paradigm, focusing on strategies for acquiring information and learning. The first task of a cognitive system is spectrum sensing, which has been studied mainly in the context of opportunistic spectrum access, in which cognitive nodes must implement signal detection techniques to identify unused bands for transmission. In the present work, we study different spectrum sensing algorithms, focusing on their statistical description and on the evaluation of their detection performance. Moving beyond traditional sensing approaches, we consider the presence of practical impairments and analyze algorithm design. Without any ambition to cover the broad field of spectrum sensing, we aim at providing contributions to its main classes of techniques. In particular, in the context of energy detection we study the practical design of the test, considering the case in which the noise power is estimated at the receiver. This analysis allows us to examine in depth the phenomenon of the SNR wall, providing the conditions for its existence and showing that the presence of the SNR wall is determined by the accuracy of the noise power estimation process. In the context of eigenvalue-based detectors, which can be adopted by multiple-sensor systems, we study the practical situation of unbalanced noise power across the receivers. Then, we shift the focus from single-band detectors to wideband sensing, proposing a new approach based on information theoretic criteria. This technique is blind and, requiring no threshold setting, can be adopted even if the statistical distribution of the observed data is not known exactly. In the last part of the thesis we analyze some simple cooperative localization techniques based on weighted centroid strategies.
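A minimal sketch of the energy detection setting studied here, assuming the noise power is estimated from a finite set of signal-free samples (all parameter values are illustrative):

    import numpy as np

    rng = np.random.default_rng(3)

    def energy_detector(x, noise_power_est, eta):
        """Declare 'signal present' if the mean energy exceeds eta times
        the estimated noise power."""
        return np.mean(x ** 2) > eta * noise_power_est

    N, M = 1000, 100            # sensing samples / noise-estimation samples
    sigma2 = 1.0                # true (unknown) noise power
    snr = 10 ** (-8.0 / 10)     # -8 dB signal-to-noise ratio

    # The noise power is estimated, not known: for small M the estimation
    # error dominates, which is the root of the SNR-wall behaviour.
    sigma2_hat = np.mean(rng.normal(0, np.sqrt(sigma2), M) ** 2)

    x_h1 = rng.normal(0, np.sqrt(sigma2 * (1 + snr)), N)   # H1: Gaussian signal + noise
    x_h0 = rng.normal(0, np.sqrt(sigma2), N)               # H0: noise only
    print(energy_detector(x_h1, sigma2_hat, eta=1.05),
          energy_detector(x_h0, sigma2_hat, eta=1.05))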
Abstract:
The dynamics of a passive back-to-back test rig have been characterised, leading to a multi-coordinate approach for the analysis of arbitrary test configurations. Universal joints have been introduced into a typical pre-loaded back-to-back system in order to produce an oscillating torsional moment in a test specimen. Two different arrangements have been investigated using a frequency-based sub-structuring approach: the receptance method. A numerical model has been developed in accordance with this theory, allowing the interconnection of systems with two coordinates and closed multi-loop schemes. The model calculates the receptance functions and the modal and deflected shapes of a general system. Closed-form expressions have been developed for the following individual elements: a servomotor, a damped continuous shaft and a universal joint. Numerical results for specific cases have been compared with published data in the literature and with experimental measurements undertaken in the present work. Due to the complexity of the universal joint and its oscillating dynamic effects, a more detailed analysis of this component has been developed. Two models have been presented. The first represents the joint as two inertias connected by a massless cross-piece. The second, derived from the dynamic analysis of a spherical four-link mechanism, considers the contribution of the floating element and its gyroscopic effects. An investigation into non-linear behaviour has led to a time-domain model that utilises the Runge-Kutta fourth-order method for the resolution of the dynamic equations. It has been demonstrated that the torsional receptances of a universal joint, derived using the simple model, allow the joint to be represented as an equivalent variable inertia. In order to verify the model, a test rig has been built and experimental validation undertaken. The variable inertia of a universal joint has led to a novel application of the component as a passive device for the balancing of inertia variations in slider-crank mechanisms.
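As an illustration of the time-domain approach, the sketch below integrates with RK4 the motion of a shaft whose reflected inertia varies with angle through the Hooke-joint speed ratio cos(beta)/(1 - sin^2(beta) sin^2(theta)); the inertia and torque values are invented, not the test-rig parameters.

    import numpy as np

    BETA = np.radians(15.0)   # joint articulation angle
    J2 = 0.02                 # driven-side inertia (kg m^2)
    T_IN = 1.0                # constant driving torque (N m)

    def ratio(theta):
        """Hooke-joint speed ratio omega2/omega1 at input angle theta."""
        return np.cos(BETA) / (1.0 - np.sin(BETA) ** 2 * np.sin(theta) ** 2)

    def J_eq(theta):
        """Driven inertia reflected to the input shaft: J2 * ratio(theta)^2."""
        return J2 * ratio(theta) ** 2

    def dJ(theta, h=1e-6):
        return (J_eq(theta + h) - J_eq(theta - h)) / (2.0 * h)

    def f(state):
        """Variable-inertia equation of motion: J(th) th'' + 0.5 J'(th) th'^2 = T."""
        th, om = state
        return np.array([om, (T_IN - 0.5 * dJ(th) * om ** 2) / J_eq(th)])

    def rk4_step(state, dt):
        k1 = f(state)
        k2 = f(state + 0.5 * dt * k1)
        k3 = f(state + 0.5 * dt * k2)
        k4 = f(state + dt * k3)
        return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

    state, dt = np.array([0.0, 10.0]), 1e-4   # initial [angle, speed], time step (s)
    for _ in range(20000):                    # 2 s of motion
        state = rk4_step(state, dt)
    print(state)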
Abstract:
The Internet of Things (IoT) is the next industrial revolution: we will interact naturally with real and virtual devices as a key part of our daily life. This technology shift is expected to be greater than the Web and Mobile combined. As extremely different technologies are needed to build connected devices, the Internet of Things field is a junction between electronics, telecommunications and software engineering. Internet of Things application development happens in silos, often using proprietary and closed communication protocols. It is commonly believed that a real Internet of Things can exist only if the interoperability problem is solved. After a deep analysis of the IoT protocols, we identified a set of primitives for IoT applications. We argue that each IoT protocol can be expressed in terms of those primitives, thus solving the interoperability problem at the application protocol level. Moreover, the primitives are network and transport independent and make no assumption in that regard. This dissertation presents our implementation of an IoT platform: the Ponte project. Privacy issues follow the rise of the Internet of Things: it is clear that the IoT must ensure resilience to attacks, data authentication, access control and client privacy. We argue that it is not possible to solve the privacy issue without solving the interoperability problem: enforcing privacy rules implies the need to limit and filter the data delivery process. However, filtering data requires knowledge of the format and the semantics of the data: after an analysis of the possible data formats and representations for the IoT, we identify JSON-LD and the Semantic Web as the best solution for IoT applications. Finally, this dissertation presents our approach to increasing the throughput of filtering semantic data by a factor of ten.
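A hypothetical sketch of the two ideas (the primitive names, adapter class and filter are illustrative, not the Ponte API):

    PRIMITIVES = ("publish", "subscribe", "request", "respond")   # assumed primitive set

    class ProtocolAdapter:
        """Maps a concrete IoT protocol onto the shared primitives."""
        def __init__(self, name, mapping):
            assert set(mapping) <= set(PRIMITIVES), "unknown primitive"
            self.name, self.mapping = name, mapping

    mqtt = ProtocolAdapter("MQTT", {"publish": "PUBLISH", "subscribe": "SUBSCRIBE"})
    coap = ProtocolAdapter("CoAP", {"request": "GET", "respond": "2.05 Content",
                                    "subscribe": "GET + Observe option"})

    # A JSON-LD sensor reading: the "@context" ties each term to a shared,
    # machine-readable meaning, so privacy rules can match on semantics.
    reading = {
        "@context": {"temperature": "http://example.org/iot#temperature",
                     "unit": "http://example.org/iot#unit"},
        "@id": "urn:dev:sensor-42",
        "temperature": 21.3,
        "unit": "Cel",
    }

    def privacy_filter(doc, allowed_terms):
        """Drop every term the subscriber is not entitled to see."""
        ctx = doc.get("@context", {})
        return {k: v for k, v in doc.items()
                if k.startswith("@") or ctx.get(k) in allowed_terms}

    print(privacy_filter(reading, {"http://example.org/iot#temperature"}))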
Abstract:
Assessment of the integrity of structural components is of great importance for aerospace systems, land and marine transportation, civil infrastructures and other biological and mechanical applications. Guided wave (GW) based inspections are an attractive means for structural health monitoring. In this thesis, the study and development of techniques for GW ultrasound signal analysis and compression in the context of non-destructive testing of structures are presented. In guided wave inspections, it is necessary to address the problem of dispersion compensation. A signal processing approach based on frequency warping was adopted. This operator maps the frequency axis through a function derived from the group velocity of the test material and is used to remove the dependence on the travelled distance from the acquired signals. This processing strategy was fruitfully applied to impact location and damage localization tasks in composite and aluminum panels. It has been shown that, based on this processing tool, low-power embedded systems for GW structural monitoring can be implemented. Finally, a new procedure based on Compressive Sensing has been developed and applied for data reduction. This procedure also has a beneficial effect in enhancing the accuracy of structural defect localization. The algorithm uses a convolutive model of the propagation of ultrasonic guided waves which takes advantage of a sparse signal representation in the warped frequency domain. The recovery from the compressed samples is based on an alternating minimization procedure which achieves both an accurate reconstruction of the ultrasonic signal and a precise estimation of the waves' times of flight. This information is used to feed hyperbolic or elliptic localization procedures for accurate impact or damage localization.
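The final localization step can be sketched as follows, assuming the times of flight have already been estimated; the sensor layout and group velocity are illustrative, not a real panel setup.

    import numpy as np
    from scipy.optimize import least_squares

    SENSORS = np.array([[0.0, 0.0], [0.5, 0.0], [0.5, 0.5], [0.0, 0.5]])  # m
    C = 1500.0                                   # assumed group velocity (m/s)

    def tdoa_residuals(p, tdoa):
        """Hyperbolic equations: range differences vs. measured c * TDOA."""
        d = np.linalg.norm(SENSORS - p, axis=1)
        return (d[1:] - d[0]) - C * tdoa

    true_source = np.array([0.31, 0.17])
    d_true = np.linalg.norm(SENSORS - true_source, axis=1)
    tdoa = (d_true[1:] - d_true[0]) / C          # noiseless TDOAs w.r.t. sensor 0

    sol = least_squares(tdoa_residuals, x0=np.array([0.25, 0.25]), args=(tdoa,))
    print(sol.x)                                 # ~ [0.31, 0.17]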
Abstract:
This thesis was focused on the investigation of the linear optical properties of novel two-photon absorbers for biomedical applications. Substituted imidazole and imidazopyridine derivatives and organic dendrimers were studied as potential fluorophores for two-photon bioimaging. The results obtained showed superior luminescence properties for sulphonamido imidazole derivatives compared to other substituted imidazoles. The luminescence properties of imidazo[1,2-a]pyridines exhibited an important dependence on the substitution pattern. Substitution at the imidazole ring led to a higher fluorescence yield than substitution at the pyridine one. Bis-imidazo[1,2-a]pyridines of Donor-Acceptor-Donor type were examined. Bis-imidazo[1,2-a]pyridines dimerized at the C3 position had better luminescence properties than those dimerized at C5, displaying high emission yields and important 2PA cross-sections. Phosphazene-based dendrimers with fluorene branches and cationic charges on the periphery were also examined. Due to aggregation phenomena in polar solvents, the dendrimers registered a significant loss of luminescence with respect to the fluorene chromophore model. An improved design of more rigid chromophores yields enhanced luminescence properties which, combined with large 2PA cross-sections, make these compounds valuable as fluorophores in bioimaging. The photophysical study of several ketocoumarin initiators, designed for the fabrication of small-dimension prostheses by two-photon polymerization (2PP), was carried out. The compounds showed low emission yields, indicative of a high population of the triplet excited state, which is the active state in producing the reactive species. Their efficiency in 2PP was proved by the fabrication of microstructures, and their biocompatibility was tested in the collaborator’s laboratory. In the framework of 2PA photorelease of drugs, three fluorene-based dyads have been investigated. They were designed to release gamma-aminobutyric acid via two-photon-induced electron transfer. The experimental data in polar solvents showed a fast electron transfer followed by an almost equally fast back electron transfer process, indicating a poor optimization of the system.
Abstract:
Nanotechnologies are rapidly expanding because of the opportunities that the new materials offer in many areas, such as the manufacturing industry, food production, processing and preservation, and the pharmaceutical and cosmetic industry. The size distribution of nanoparticles determines their properties and is a fundamental parameter that needs to be monitored from small-scale synthesis up to bulk production and quality control of nanotech products on the market. A consequence of the increasing number of applications of nanomaterials is that EU regulatory authorities are introducing the obligation for companies that make use of nanomaterials to acquire analytical platforms for the assessment of the size parameters of the nanomaterials. In this work, Asymmetrical Flow Field-Flow Fractionation (AF4) and Hollow Fiber Flow Field-Flow Fractionation (HF5), hyphenated with Multiangle Light Scattering (MALS), are presented as tools for a deep functional characterization of nanoparticles. In particular, the applicability of AF4-MALS for the characterization of liposomes in a wide series of media is demonstrated. The technique is then used to explore the functional features of a liposomal drug vector in terms of its biological and physical interaction with blood serum components: a comprehensive approach to understand the behavior of lipid vesicles in terms of drug release and fusion/interaction with other biological species is described, together with the weaknesses and strengths of the method. Afterwards, the size characterization, size stability and conjugation of azidothymidine drug molecules with a new generation of metastable drug vectors, the Metal-Organic Frameworks, are discussed. Lastly, the applicability of HF5-ICP-MS for the rapid screening of samples of nanorisk relevance is shown: rather than a deep and comprehensive characterization, this time a quick and smart methodology is presented that within a few steps provides qualitative information on the content of metallic nanoparticles in tattoo ink samples.
Abstract:
The recent advent of next-generation sequencing technologies has revolutionized the way the genome is analyzed. This innovation allows deeper information to be obtained at a lower cost and in less time, and provides data that are discrete measurements. One of the most important applications of these data is differential analysis, that is, investigating whether a gene exhibits a different expression level across two (or more) biological conditions (such as disease states, treatments received and so on). As for the statistical analysis, the final aim is statistical testing, and for modeling these data the Negative Binomial distribution is considered the most adequate, especially because it allows for overdispersion. However, the estimation of the dispersion parameter is a very delicate issue because little information is usually available for estimating it. Many strategies have been proposed, but they often result in procedures based on plug-in estimates, and in this thesis we show that this discrepancy between the estimation and the testing framework can lead to an uncontrolled type I error rate. We propose a mixture model that allows each gene to share information with other genes that exhibit similar variability. Afterwards, three consistent statistical tests are developed for differential expression analysis. We show that the proposed method improves the sensitivity of detecting differentially expressed genes with respect to common procedures, since it is the best at reaching the nominal type I error rate while keeping power high. The method is finally illustrated on prostate cancer RNA-seq data.
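A minimal sketch of the testing framework, with the dispersion plugged in as if known; quantifying the consequences of estimating it is exactly the gap the thesis's mixture model addresses.

    import numpy as np
    from scipy.stats import nbinom, chi2

    def nb_loglik(counts, mean, disp):
        """NB log-likelihood with mean mu and dispersion alpha (var = mu + alpha*mu^2)."""
        r = 1.0 / disp
        return nbinom.logpmf(counts, r, r / (r + mean)).sum()

    def lrt_two_groups(y1, y2, disp):
        """Likelihood-ratio test of H0: equal means vs H1: group-specific means."""
        y = np.concatenate([y1, y2])
        l0 = nb_loglik(y, y.mean(), disp)
        l1 = nb_loglik(y1, y1.mean(), disp) + nb_loglik(y2, y2.mean(), disp)
        stat = 2.0 * (l1 - l0)
        return stat, chi2.sf(stat, df=1)

    rng = np.random.default_rng(4)
    disp, mu1, mu2 = 0.2, 100.0, 180.0
    r = 1.0 / disp
    y1 = rng.negative_binomial(r, r / (r + mu1), size=5)   # condition A counts
    y2 = rng.negative_binomial(r, r / (r + mu2), size=5)   # condition B counts
    print(lrt_two_groups(y1, y2, disp))                    # (statistic, p-value)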
Abstract:
The present dissertation aims to explore, theoretically and experimentally, the problems and the potential advantages of different types of power converters for “Smart Grid” applications, with particular emphasis on multilevel architectures, which are attracting rising interest even for industrial applications. The models of the main multilevel architectures (Diode-Clamped and Cascaded) are presented. The modulation strategies best suited to operation as a network interface are identified. In particular, the close correlation between the PWM (Pulse Width Modulation) approach and the SVM (Space Vector Modulation) approach is highlighted. An innovative multilevel topology called the MMC (Modular Multilevel Converter) is investigated, and the single-phase, three-phase and back-to-back configurations are analyzed. Specific control techniques that can manage, in an appropriate way, the charge level of the numerous capacitors and handle the power flow flexibly are defined and experimentally validated. Another converter that is attracting interest in the field of Power Conditioning Systems is the Matrix Converter. In this architecture too, the output voltage is multilevel. It offers a high-quality input current and a bidirectional power flow, and it can control the input power factor (i.e. it can participate in active and reactive power regulation). The implemented control system, which allows fast data acquisition for diagnostic purposes, is described and experimentally verified.
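As a minimal illustration of multilevel modulation, the sketch below implements level-shifted carrier PWM for a five-level leg; SVM and the thesis's MMC balancing control are not shown, and all values are illustrative.

    import numpy as np

    def level_shifted_pwm(t, f_ref=50.0, f_carrier=2000.0, n_carriers=4):
        """Return the discrete output level (0..n_carriers) at times t."""
        ref = 0.5 * (1.0 + 0.9 * np.sin(2 * np.pi * f_ref * t))   # normalized to [0, 1]
        # Triangular carrier in [0, 1], replicated into n stacked bands.
        tri = np.abs(2.0 * ((t * f_carrier) % 1.0) - 1.0)
        level = np.floor(ref * n_carriers)        # band containing the reference
        frac = ref * n_carriers - level           # position inside that band
        return level + (frac > tri)               # switch between adjacent levels

    t = np.arange(0, 0.04, 1e-6)                  # two 50 Hz periods
    levels = level_shifted_pwm(t)                 # five-level switched waveform
    print(sorted(set(levels.astype(int))))        # -> [0, 1, 2, 3, 4]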
Abstract:
This work focuses on the study of saltwater intrusion in coastal aquifers, and in particular on the construction of conceptual schemes to evaluate the associated risk. Saltwater intrusion depends on different natural and anthropic factors, both exhibiting strongly random behaviour, which should be considered for an optimal management of the territory and of water resources. Given the uncertainty in the problem parameters, the risk associated with salinization needs to be cast in a probabilistic framework. On the basis of a widely adopted sharp-interface formulation, key hydrogeological problem parameters are modeled as random variables, and global sensitivity analysis is used to determine their influence on the position of the saltwater interface. The analyses presented in this work rely on an efficient model reduction technique, based on Polynomial Chaos Expansion, able to provide an accurate description of the model without a heavy computational burden. When the assumptions of classical analytical models are not respected, as happens in several applications to real case studies, including the area analyzed in the present work, one can adopt data-driven techniques based on the analysis of the data characterizing the system under study. It follows that a model can be defined on the basis of connections between the system state variables, with only a limited number of assumptions about the "physical" behaviour of the system.
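The probabilistic framing can be sketched with the classical Ghyben-Herzberg sharp-interface relation and Monte Carlo (Saltelli-style) first-order Sobol indices; the thesis accelerates the sensitivity analysis with a Polynomial Chaos surrogate, and the input distributions below are illustrative.

    import numpy as np

    rng = np.random.default_rng(5)

    def interface_depth(h, rho_f, rho_s):
        """Ghyben-Herzberg: depth of the fresh/salt interface below sea level."""
        return rho_f / (rho_s - rho_f) * h

    def sample_inputs(n):
        h = rng.uniform(0.5, 2.0, n)            # freshwater head above sea level (m)
        rho_f = rng.normal(1000.0, 1.0, n)      # freshwater density (kg/m^3)
        rho_s = rng.normal(1025.0, 2.0, n)      # seawater density (kg/m^3)
        return np.column_stack([h, rho_f, rho_s])

    def sobol_first_order(model, n=100_000):
        """Saltelli-style Monte Carlo estimator of first-order Sobol indices."""
        A, B = sample_inputs(n), sample_inputs(n)
        fA, fB = model(*A.T), model(*B.T)
        var = np.var(np.concatenate([fA, fB]))
        S = []
        for i in range(A.shape[1]):
            ABi = A.copy(); ABi[:, i] = B[:, i]   # A with column i taken from B
            fABi = model(*ABi.T)
            S.append(np.mean(fB * (fABi - fA)) / var)
        return S

    print(sobol_first_order(interface_depth))     # the head dominates the variance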