985 results for "advanced techniques"
Abstract:
Wavelength converters are essential for the realization of wavelength-routed optical communication networks. In the literature, wavelength converters based on four-wave mixing in semiconductor optical amplifiers (SOAs) constitute an extremely attractive solution, owing to the many characteristics they offer that such networks require. With the emergence of commercial coherent detection systems, together with recent advances in digital signal processing, it is imperative to evaluate the performance of these wavelength converters in the context of advanced modulation formats. The objectives of this thesis are: 1) to study the feasibility of wavelength converters based on four-wave mixing in SOAs for advanced modulation formats, and 2) to propose a digital-signal-processing technique to improve their performance. First, an experimental study of the wavelength conversion of quadrature amplitude modulation (QAM) formats is carried out. In particular, wavelength conversion of 16 Gbaud 16-QAM and 5 Gbaud 64-QAM signals in a commercial SOA is demonstrated over the entire C band. The results show that, because of the nonlinear distortions induced on the converted signal, the optimal operating point of the wavelength converter differs from the one obtained when converting intensity-modulated formats: for advanced modulation formats, it is the trade-off between converted signal power and induced nonlinearities that determines the optimal operating point. Coherent receivers enable digital signal processing to compensate, after detection, for the degradation suffered by the transmitted signal. To take advantage of these new possibilities, a digital post-compensation technique for the distortions induced on the converted signal, based on a small-signal analysis of the equations governing the gain dynamics inside SOAs, is developed. Its effectiveness is demonstrated through numerical simulations and experimental measurements of the wavelength conversion of 10 Gbaud 16-QAM and 5 Gbaud 64-QAM signals. The method significantly improves the performance of the wavelength converter, mainly for higher-order modulation formats such as 64-QAM. Finally, an exhaustive experimental study of the post-compensation technique is carried out for 64-QAM signals. The results show that, even with a noisy signal at the input of the wavelength converter, the proposed technique still improves the quality of the received signal. In addition, a study of the optimal operating point of the wavelength converter shows that it varies with the optical losses following wavelength conversion.
In a wavelength-routed optical communication network, the signal is likely to pass through several wavelength conversion stages. For this reason, the effectiveness of the post-compensation technique is demonstrated, for the first time in the literature, over two successive wavelength conversion stages of 5 Gbaud 64-QAM signals. The results of this thesis show that wavelength converters based on four-wave mixing in semiconductor optical amplifiers, used in conjunction with digital signal processing techniques, constitute an extremely promising technology for modern wavelength-routed optical communication networks.
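The post-compensation rests on a small-signal analysis of the equations governing SOA gain dynamics, which the abstract does not reproduce; as a point of reference, a commonly used form of that model (the Agrawal-Olsson rate equation for the integrated gain, given here as a hedged sketch rather than the thesis' exact formulation) is:

```latex
% Rate equation for the integrated gain h(t) = \int g(z,t)\,dz of an SOA
% (Agrawal-Olsson form); a small-signal analysis linearizes h(t) around
% its average value at the chosen operating point.
\frac{\mathrm{d}h}{\mathrm{d}t} = \frac{h_0 - h}{\tau_c}
  - \frac{P_{\mathrm{in}}(t)}{E_{\mathrm{sat}}}\left(e^{h} - 1\right)
```

Here h_0 is the unsaturated integrated gain, \tau_c the carrier lifetime, E_sat the saturation energy and P_in(t) the total input power; writing h(t) as an average value plus a small fluctuation yields the transfer function from power fluctuations to gain and phase distortions that a digital post-compensation filter can then invert.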
Abstract:
Optical communication systems with advanced modulation formats are currently one of the most important research topics in optical communications, a line of research driven by the demand for ever higher data rates. This thesis investigates efficient techniques for advanced modulation with coherent detection, and for orthogonal frequency-division multiplexing (OFDM) and discrete multitone (DMT) with direct and coherent detection, in order to improve the performance of optical networks. In the first part, we investigate digital filter backpropagation (DFBP) as a simple technique for mitigating semiconductor optical amplifier (SOA) nonlinearity in coherent detection systems. For the first time, we experimentally demonstrate the effectiveness of DFBP in compensating SOA-induced nonlinearities in a single-carrier 16-QAM coherent detection system. We compare the performance of DFBP with the fourth-order Runge-Kutta method, examine the sensitivity of DFBP performance to its parameters, and then propose a new parameter-estimation method for DFBP. Finally, we demonstrate the transmission of 16-QAM signals at 22 Gbaud over 80 km of optical fiber using the proposed parameter-estimation technique for DFBP. In the second part, we focus on techniques to improve the performance of optical OFDM systems, considering both coherent optical OFDM (CO-OFDM) and direct-detection optical OFDM (DDO-OFDM). First, we propose a combination of clipping and predistortion to compensate for the nonlinear distortions of the CO-OFDM transmitter: we use piecewise linear interpolation (PLI) to characterize the transmitter nonlinearity and apply the inverse of the PLI estimate at the transmitter to compensate for the induced nonlinearities. Second, we design optimized irregular constellations for short-reach DDO-OFDM systems, considering two channel noise models, and experimentally demonstrate 100 Gb/s+ OFDM/DMT with direct detection using the optimized QAM constellations. In the third part, we propose a passive optical network (PON) architecture with DDO-OFDM in the downlink and CO-OFDM in the uplink. We examine two scenarios for the frequency allocation and modulation format of the signals, identify the main impairment limiting the bidirectional PON, and offer solutions to minimize its effects.
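The fourth-order Runge-Kutta reference against which DFBP is compared amounts to integrating the SOA gain equation sample by sample; a minimal sketch, assuming the Agrawal-Olsson gain model with illustrative parameter values, is:

```python
import numpy as np

def soa_gain_rk4(p_in, dt, h0=5.0, tau_c=200e-12, e_sat=5e-12):
    """Integrate dh/dt = (h0 - h)/tau_c - (exp(h) - 1) * P_in(t) / E_sat
    with fourth-order Runge-Kutta; p_in holds input power samples [W]
    spaced dt seconds apart. Parameter values are illustrative."""
    p_in = np.asarray(p_in, dtype=float)

    def f(h, p):
        return (h0 - h) / tau_c - (np.exp(h) - 1.0) * p / e_sat

    h = np.empty_like(p_in)
    h_cur = h0
    for i, p in enumerate(p_in):
        k1 = f(h_cur, p)
        k2 = f(h_cur + 0.5 * dt * k1, p)
        k3 = f(h_cur + 0.5 * dt * k2, p)
        k4 = f(h_cur + dt * k3, p)
        h_cur += dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
        h[i] = h_cur
    return h
```

The recovered h(t) gives an amplitude gain of exp(h/2) and a phase rotation of -alpha*h/2 (alpha being the linewidth enhancement factor); DFBP approximates this per-sample integration with a simple digital filter, which is what makes it attractive as a low-complexity receiver-side compensator.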
Abstract:
The quality and speed of genome sequencing have advanced as technology boundaries have been stretched. This advancement is commonly divided into three generations: first-generation methods enabled the sequencing of clonal DNA populations, second-generation methods massively increased throughput by parallelizing many reactions, and third-generation methods allow the direct sequencing of single DNA molecules. The first techniques to sequence DNA were not developed until the mid-1970s, when two distinct methods appeared almost simultaneously, one by Allan Maxam and Walter Gilbert and the other by Frederick Sanger. The first is a chemical method that cleaves DNA at specific bases; the second uses chain-terminating ddNTPs while a copy is synthesized from the DNA template. Both methods generate fragments of varying lengths that are then separated by electrophoresis. Until the 1990s, DNA sequencing remained relatively expensive and slow, and the use of radiolabeled nucleotides compounded the problem through safety concerns and prevented automation. Advances within the first generation included the replacement of radioactive labels with fluorescently labeled ddNTPs and cycle sequencing with thermostable DNA polymerases, which allowed automation and signal amplification, making the process cheaper, safer and faster. Another method is pyrosequencing, based on the "sequencing by synthesis" principle; it differs from Sanger sequencing in that it relies on the detection of the pyrophosphate released upon nucleotide incorporation. By the end of the last millennium, the parallelization of this method launched Next Generation Sequencing (NGS), with 454 as the first of many platforms able to process multiple samples, referred to as second-generation sequencing; here, electrophoresis was completely eliminated. Another method sometimes used is SOLiD, based on sequencing by ligation of fluorescently dye-labeled di-base probes that compete to ligate to the sequencing primer; the specificity of the di-base probe is achieved by interrogating every first and second base in each ligation reaction. The widely used Solexa/Illumina method employs modified dNTPs carrying so-called "reversible terminators" that block further polymerization; the terminator also carries a fluorescent label that can be detected by a camera. An intermediate step toward the third generation came from Ion Torrent, which developed a sequencing-by-synthesis technique whose main feature is the detection of the hydrogen ions released during base incorporation. The third generation, in turn, exploits nanotechnology to process single DNA molecules, from real-time synthesis sequencing systems such as PacBio to nanopore sequencing, envisioned since 1995, which uses nanosensor channels derived from bacterial pore proteins to guide the DNA strand past a sensor able to detect each nucleotide residue. Technology has advanced so quickly that one may wonder: how do we imagine the next generation?
Abstract:
The atomic-level structure and chemistry of materials ultimately dictate their observed macroscopic properties and behavior. As such, an intimate understanding of these characteristics allows for better materials engineering and improvements in the resulting devices. In our work, two material systems were investigated using advanced electron and ion microscopy techniques, relating the measured nanoscale traits to overall device performance. First, transmission electron microscopy and electron energy loss spectroscopy (TEM-EELS) were used to analyze interfacial states at the semiconductor/oxide interface in wide-bandgap SiC microelectronics. This interface contains defects that significantly diminish SiC device performance, and their fundamental nature remains generally unresolved. The impacts of various microfabrication techniques were explored, examining both current commercial and next-generation processing strategies. In further investigations, machine learning techniques were applied to the EELS data, revealing previously hidden Si, C, and O bonding states at the interface, which help explain the origins of mobility enhancement in SiC devices. Finally, the impacts of bias temperature stressing on the SiC interfacial region were explored. In the second system, focused ion beam/scanning electron microscopy (FIB/SEM) was used to reconstruct 3D models of solid oxide fuel cell (SOFC) cathodes. Since the specific degradation mechanisms of SOFC cathodes are poorly understood, FIB/SEM and TEM were used to analyze and quantify changes in the microstructure during performance degradation. Novel strategies for microstructure calculation from FIB-nanotomography data were developed and applied to LSM-YSZ and LSCF-GDC composite cathodes aged with environmental contaminants to promote degradation. In LSM-YSZ, migration of both La and Mn cations to the grain boundaries of YSZ was observed using TEM-EELS. However, few substantial changes were observed in the overall microstructure of the cells, consistent with the lack of performance degradation induced by H2O. Using similar strategies, a series of LSCF-GDC cathodes aged in H2O, CO2, and Cr-vapor environments was analyzed. FIB/SEM observation revealed considerable formation of secondary phases within these cathodes, and quantifiable modifications of the microstructure. In particular, Cr-poisoning was observed to cause substantial byproduct formation, which was correlated with drastic reductions in cell performance.
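The abstract does not name the specific machine learning technique applied to the EELS data; one common, hedged illustration of the idea is unsupervised matrix factorization of the spectrum image, which separates mixed spectra into candidate bonding-state components (file names and shapes below are hypothetical):

```python
import numpy as np
from sklearn.decomposition import NMF

# Hypothetical EELS spectrum image acquired across the SiC/oxide interface:
# shape (ny, nx, n_energy), one energy-loss spectrum per scan position.
cube = np.load("eels_cube.npy")
ny, nx, ne = cube.shape

X = cube.reshape(ny * nx, ne)      # unfold: one spectrum per row
X = X - X.min()                    # NMF requires non-negative entries

nmf = NMF(n_components=4, init="nndsvda", max_iter=500)
weights = nmf.fit_transform(X)     # per-pixel abundance of each component
components = nmf.components_       # candidate bonding-state spectra

abundance_maps = weights.reshape(ny, nx, -1)  # spatial map per component
```

Plotting `components` against energy loss and `abundance_maps` across the interface is the kind of decomposition that can expose spatially localized Si, C and O bonding states hidden in the raw, mixed spectra.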
Abstract:
This thesis focuses on advanced reconstruction methods and Dual Energy (DE) Computed Tomography (CT) applications for proton therapy, aiming to improve patient positioning and to investigate approaches for dealing with metal artifacts. To tackle the first goal, an algorithm for post-processing input DE images was developed. Its outputs are tumor- and bone-canceled images, which help in recognizing structures in the patient's body. We showed that the positioning error is substantially reduced when using contrast-enhanced images, suggesting the potential of this application. While positioning plays a key role in treatment delivery, the quality of the planning CT is even more important. Here, modern CT scanners offer the possibility to tackle challenging cases, such as the treatment of tumors close to metal implants. Possible approaches for dealing with the artifacts introduced by such implants were investigated experimentally at the Paul Scherrer Institut (Switzerland) by simulating several treatment plans on an anthropomorphic phantom. In particular, we examined cases in which no correction, manual correction, or the Iterative Metal Artifact Reduction (iMAR) algorithm was used to correct the artifacts, using both Filtered Back Projection and Sinogram Affirmed Iterative Reconstruction as image reconstruction techniques. Moreover, direct stopping-power calculation from DE images with iMAR was also considered as an alternative approach. The delivered dose, measured with Gafchromic EBT3 films, was compared with the one calculated in the Treatment Planning System; residual positioning errors, daily machine-dependent uncertainties, and film quenching were taken into account in the analyses. Although plans with multiple fields seemed more robust than single-field plans, results generally showed better agreement between prescribed and delivered dose when using iMAR, especially when combined with the DE approach. We thus demonstrated the potential of these advanced algorithms to improve dosimetry for plans in the presence of metal implants.
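The abstract does not give the cancellation formula behind the tumor- and bone-canceled images; one standard dual-energy weighted-subtraction scheme, sketched here as an assumed illustrative form, nulls the contrast of a chosen material:

```latex
% Weighted subtraction of the high- and low-energy CT images; picking w as
% the ratio of the target material's effective attenuation at the two
% energies cancels that material's contrast (illustrative form).
I_{\mathrm{canc}}(\mathbf{x}) = I_{H}(\mathbf{x}) - w\, I_{L}(\mathbf{x}),
\qquad
w = \frac{\mu_{m}(E_{H})}{\mu_{m}(E_{L})}
```

With w calibrated for bone (or for the contrast-enhanced tumor), the corresponding structure disappears from the output image, leaving the remaining anatomy easier to match during patient positioning.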
Abstract:
The haploid phase of spermatogenesis (spermiogenesis) is characterized by a major remodeling of chromatin structure and a change in the topology of the spermatid's DNA. The mechanisms by which this change occurs, and the proteins involved, have not yet been fully elucidated. My work established the presence of transient double-strand breaks during this remodeling using the comet assay and pulsed-field gel electrophoresis. By performing immunofluorescence on tissue sections and using a highly active nuclear extract, the presence of topoisomerases as well as markers of repair systems was confirmed. The repair proteins identified belong to error-prone systems, so this structural recasting of chromatin could be genetically unstable and could explain the paternal bias observed for de novo mutations in recent high-throughput screening studies. A technique allowing the specific immunocapture of double-strand breaks was developed and applied to murine spermatids at different stages of differentiation. High-throughput sequencing showed that the double-strand break hotspots of spermiogenesis occur mostly in intergenic DNA, notably in LINE1 sequences, satellite DNA and simple repeats. The hotspots also contain binding motifs for proteins of the FOX and PRDM families, whose functions include binding and locally remodeling condensed chromatin. The binding motif of the BRCA1 protein is likewise enriched in double-strand break hotspots; BRCA1 acts, among other roles, in DNA repair by non-homologous end joining (NHEJ) and in the repair of DNA-topoisomerase adducts. Remarkably, the recognition motif of SPO11, the protein involved in the formation of meiotic breaks, was enriched in the hotspots, suggesting that the meiotic machinery may also be used during spermiogenesis to form the breaks. Finally, although the hotspots are located mainly in intergenic sequences, the genes targeted are involved in brain and neuron development. These results are consistent with the predominantly paternal origin of the de novo mutations associated with autism spectrum disorders and schizophrenia, and with their increase with paternal age. Since the chromatin remodeling processes of spermatids are evolutionarily conserved, these results suggest that the chromatin remodeling of spermiogenesis represents an additional mechanism contributing to the formation of de novo mutations, explaining the paternal bias observed for certain types of mutations.
Abstract:
This Ph.D. project aimed at the development and improvement of analytical solutions for the quality and authenticity control of virgin olive oils. In line with this main objective, different research activities were carried out. Concerning quality control, two of the official parameters defined by regulations (free acidity and fatty acid ethyl esters) were addressed, and more sustainable and easier analytical solutions were developed and validated in-house. Regarding authenticity, two different issues were faced: verification of the geographical origin of extra virgin olive oils (EVOOs) and virgin olive oils (VOOs), and detection of soft-deodorized oils illegally mixed with EVOOs. For fatty acid ethyl esters, a revised method based on off-line HPLC-GC-FID (with a PTV injector) was developed, revising both the preparative phase and the GC injector required by the official method; the method was then validated in-house by evaluating several parameters. Concerning free acidity, a portable system suitable for in-situ measurements of VOO free acidity was developed and validated in-house. Its working principle is the estimation of free acidity from the conductance of an emulsion of a hydro-alcoholic solution and the sample under test; the procedure is quick and easy, and therefore suitable for operators without specific training. Another study carried out during the Ph.D. concerned the application of flash gas chromatography for volatile compound analysis, combined with untargeted chemometric data processing, to discriminate EVOOs and VOOs of different geographical origins. A set of 210 samples from different EU member states and extra-EU countries was collected and analyzed, and the data were processed with two classification techniques, one linear (PLS-DA) and one non-linear (ANN). Finally, a preliminary study on the application of GC-IMS (gas chromatography - ion mobility spectrometry) for the detection of soft-deodorized olive oils was carried out.
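The abstract names PLS-DA as the linear classifier for the volatile-compound data but gives no implementation details; a minimal hedged sketch, with hypothetical file names and an arbitrary number of latent variables, could look like:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelBinarizer

# Hypothetical data: rows = oil samples, columns = flash-GC volatile features;
# labels = geographical origin (several classes assumed).
X = np.load("volatile_features.npy")
y = np.load("origin_labels.npy")

lb = LabelBinarizer()
Y = lb.fit_transform(y)               # one-hot targets turn PLS into PLS-DA

X_tr, X_te, Y_tr, Y_te, y_tr, y_te = train_test_split(X, Y, y, test_size=0.3)

pls = PLSRegression(n_components=10)  # latent variables: tune by cross-validation
pls.fit(X_tr, Y_tr)
pred = lb.classes_[np.argmax(pls.predict(X_te), axis=1)]
accuracy = np.mean(pred == y_te)
```

The same train/test split could feed the non-linear ANN for a like-for-like comparison; the 210-sample set mentioned above would sit in the two arrays loaded at the top.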
Abstract:
Molecular radiotherapy (MRT) is a fast-developing and promising treatment for metastasised neuroendocrine tumours. The efficacy of MRT rests on its capability to selectively deliver radiation to tumour cells while minimizing the dose administered to normal tissues, and its outcome depends on individual patient characteristics; personalized treatment planning is therefore important to improve the results of therapy. Dosimetry plays a key role in this setting, as absorbed dose is the main physical quantity related to radiation effects on cells, and dosimetry in MRT consists of a complex series of procedures ranging from imaging quantification to dose calculation. This doctoral thesis focused on several aspects of the clinical implementation of absorbed dose calculations in MRT. The accuracy of SPECT/CT quantification was assessed in order to determine the optimal reconstruction parameters. A model of partial volume effect (PVE) correction was developed to improve activity quantification in small volumes, such as the lesions encountered in clinical practice. Advanced dosimetric methods were compared with the aim of identifying the most accurate modality applicable in clinical routine, and, for the first time on a large number of clinical cases, the overall uncertainty of tumour dose calculation was assessed. As part of the MRTDosimetry project, protocols for the calibration of SPECT/CT systems and for the implementation of dosimetry were drawn up to provide standard guidelines to clinics offering MRT. Estimating the risk of radio-toxicity side effects and the chance of inducing damage to neoplastic cells is crucial for patient selection and treatment planning. In this thesis, NTCP and TCP models were derived from clinical data to help clinicians decide the pharmaceutical dosage with respect to therapy control and the limitation of damage to healthy tissues. Moreover, a model for tumour response prediction based on machine learning analysis was developed.
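The abstract does not state the functional forms of the NTCP and TCP models; two commonly used choices, given here as a hedged sketch rather than the thesis' exact models, are the Poisson-based TCP and a logistic NTCP:

```latex
% Poisson TCP with initial clonogen number N_0 and radiosensitivity \alpha;
% logistic NTCP parameterized by TD_{50} (dose giving a 50% complication
% probability) and the normalized slope \gamma_{50}. Illustrative forms.
\mathrm{TCP}(D) = \exp\!\left(-N_0\, e^{-\alpha D}\right),
\qquad
\mathrm{NTCP}(D) = \left[\,1 + \left(\frac{TD_{50}}{D}\right)^{4\gamma_{50}}\right]^{-1}
```

Fitting such curves to per-patient absorbed doses and observed outcomes provides the dose-response information needed to balance tumour control against toxicity when selecting the administered activity.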
Abstract:
Colourants are substances used to change the colour of something and are classified into three typologies: a) pigments, b) dyes, and c) lakes and hybrid pigments. Their identification is very important when studying cultural heritage: it gives information about the artistic technique, can help in dating, and offers insights into the condition of the object. Moreover, the study of degradation phenomena provides a framework for preventive conservation strategies, supplies evidence of the object's original appearance, and contributes to the authentication of works of art. However, the complexity of these systems makes a complete understanding impossible with any single technique, so a multi-analytical approach is necessary. This work focuses on the set-up and application of advanced spectroscopic methods for the study of colourants in cultural heritage. The first chapter presents the identification of modern synthetic organic pigments using Metal Underlayer-ATR (MU-ATR), and the characterization of synthetic dyes extracted from wool fibres using Thin Layer Chromatography (TLC) coupled to MU-ATR with AgI@Au plates. The second chapter presents the study of the effect of metallic Ag on the photo-oxidation of orpiment and of the influence of different factors, such as light and relative humidity. We used a combination of vibrational and synchrotron radiation-based X-ray microspectroscopy techniques: µ-ATR-FT-IR, µ-Raman, SR-µ-XRF, µ-XANES at the S K-, Ag L3- and As K-edges, and SR-µ-XRD. The third chapter presents the study of metal carboxylates in paintings, specifically the formation of Zn and Pb carboxylates in three different binders: stand linseed oil, whole egg, and beeswax. We used micro-ATR-FT-IR, macro FT-IR in total reflection (rMA-FT-IR), portable near-infrared spectroscopy (NIR), macro X-ray powder diffraction (MA-XRPD), XRPD, and gas chromatography-mass spectrometry (GC-MS). For data processing, we explored the rMA-FT-IR and NIR data with principal component analysis (PCA).
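As a hedged illustration of the PCA step applied to the rMA-FT-IR and NIR data, the standard workflow unfolds each spectrum into a row of a data matrix and inspects scores and loadings (file names below are hypothetical):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical matrix: rows = paint samples, columns = rMA-FT-IR (or NIR)
# absorbance values on a common wavenumber grid.
spectra = np.load("rma_ftir_spectra.npy")

X = StandardScaler().fit_transform(spectra)  # mean-center and scale variables
pca = PCA(n_components=3)
scores = pca.fit_transform(X)                # sample coordinates (score plot)
loadings = pca.components_                   # spectral loading per component
explained = pca.explained_variance_ratio_    # variance captured by each PC
```

Clusters in the score plot group samples with similar spectral signatures, while the loadings indicate which bands (for instance those of Zn or Pb carboxylates) drive the separation.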
Abstract:
Besides increasing the share of electric and hybrid vehicles, in order to comply with more stringent environmental protection limits, in the mid-term the auto industry must improve the efficiency of the internal combustion engine and the well-to-wheel efficiency of the fuels employed. Achieving this target requires deeper knowledge of the phenomena that influence mixture formation and of the chemical reactions involving new synthetic fuel components, which is complex and time-intensive to obtain purely by experimentation. Numerical simulations therefore play an important role in this development process, but their use is effective only if they are accurate enough to capture these variations. The models most relevant to simulating reacting mixture formation and the subsequent chemical reactions are investigated in the present work with a critical approach, in order to provide instruments for defining the most suitable approaches in an industrial context constrained by time and budget. To overcome these limitations, new methodologies have been developed that combine detailed and simplified modelling techniques for phenomena involving chemical reactions and mixture formation in non-traditional conditions (e.g. water injection, biofuels, etc.). Thanks to the extensive use of machine learning and deep learning algorithms, several applications have been revised or implemented, with the target of reducing the computing time of some traditional tasks by orders of magnitude. Finally, a complete workflow leveraging these new models has been defined and used to evaluate the effects of different surrogate formulations of the same experimental fuel on a proof-of-concept GDI engine model.
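The abstract does not specify which tasks were accelerated by machine learning; a hedged sketch of the general pattern, training a regressor offline on samples of an expensive detailed-chemistry computation and querying it during the simulation, with entirely hypothetical names and inputs, is:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical training set sampled offline from a detailed-chemistry solver:
# columns = [pressure, temperature, equivalence ratio, water mass fraction],
# target = laminar flame speed. File names and variables are illustrative.
X = np.load("operating_points.npy")
y = np.load("laminar_flame_speed.npy")

surrogate = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000),
)
surrogate.fit(X, y)

# At run time the CFD code queries the surrogate instead of the chemistry
# solver, trading a one-off training cost for much faster evaluations.
s_l = surrogate.predict(np.array([[5e5, 700.0, 1.0, 0.1]]))
```

This is the sense in which learned models can cut the computing time of traditional tasks by orders of magnitude: the expensive physics is evaluated once, offline, and replaced by a cheap interpolator inside the workflow.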
Abstract:
Non-Destructive Testing (NDT) and Structural Health Monitoring (SHM) are becoming essential in many application contexts, e.g. civil, industrial and aerospace, to reduce structure maintenance costs and improve safety. Conventional inspection methods typically exploit bulky and expensive instruments and rely on highly demanding signal processing techniques. The pressing need to overcome these limitations is the common thread of the work presented in this thesis. In the first part, a scalable, low-cost, multi-sensor smart sensor network is introduced. The capability of this technology to carry out accurate modal analysis on structures undergoing flexural vibrations was validated in two experimental campaigns. The suitability of low-cost piezoelectric disks for modal analysis was then demonstrated; to enable this kind of sensing technology in such non-conventional applications, ad hoc data-merging algorithms were developed. In the second part, imaging algorithms for Lamb-wave inspection (namely DMAS and DS-DMAS) were implemented and validated. Results show that DMAS outperforms the canonical Delay and Sum (DAS) approach in terms of image resolution and contrast, and DS-DMAS achieves better results than both DMAS and DAS by suppressing artefacts and noise. Exploiting the full potential of these procedures requires accurate group-velocity estimates; novel wavefield analysis tools for estimating dispersion curves from SLDV acquisitions were therefore investigated. An image segmentation technique (DRLSE) was exploited in the k-space to extract the wavenumber profile and was compared with compressive sensing methods for extracting group- and phase-velocity information. Validation on three different carbon-fibre plates showed that the proposed solutions can accurately determine wavenumbers and velocities in polar coordinates at multiple excitation frequencies.
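DMAS itself is a well-documented beamformer; a minimal sketch of its core pairwise-product step for one image pixel, assuming the sensor signals have already been delayed (back-propagated) to that pixel, is:

```python
import numpy as np

def dmas_pixel(delayed):
    """Delay-Multiply-and-Sum output for one pixel.
    delayed: (n_sensors,) samples already delayed to the pixel under test.
    Uses the signed square root so products keep the dimensions of energy."""
    s = np.sign(delayed) * np.sqrt(np.abs(delayed))
    total = s.sum()
    # Sum over all pairs s_i * s_j with i < j, via the square-of-sum identity.
    return 0.5 * (total ** 2 - np.sum(s ** 2))
```

Scanning this kernel over every pixel, with delays computed from the estimated group velocity, yields the DMAS image, which is why the dispersion-curve estimation discussed above matters; the double-stage DS-DMAS variant is not sketched here.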
Abstract:
With the advent of new technologies, it is increasingly easy to obtain data of different natures from ever more accurate sensors that measure the most disparate physical quantities with different methodologies. Data collection thus becomes progressively more important, taking the form of archiving, cataloguing, and online and offline consultation of information. Over time, the amount of data collected can become so large that it contains information that cannot easily be explored manually or with basic statistical techniques. Such Big Data therefore calls for more advanced investigation techniques, such as machine learning and deep learning. In this work, applications to precision livestock farming and to the heat stress experienced by dairy cows are described. Experimental Italian and German barns were involved in the training and testing of a Random Forest algorithm, which predicted milk production from the microclimatic conditions of the preceding days with satisfactory accuracy. Furthermore, in order to identify an objective method for detecting production drops, a robust statistics technique was used in comparison with the Wood model, which is typically employed as an analytical model of the lactation curve. Its application to sample lactations and the results obtained give confidence in the future use of this method.
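A hedged sketch of the Random Forest regression described above, with hypothetical file names and feature layout (lagged daily microclimate summaries such as the temperature-humidity index), is:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Hypothetical data: each row holds microclimatic summaries (e.g., mean THI,
# humidity, max temperature) for the preceding days; target = daily milk yield.
X = np.load("microclimate_lags.npy")   # (n_days, n_lagged_features)
y = np.load("milk_yield.npy")          # (n_days,)

rf = RandomForestRegressor(n_estimators=500, min_samples_leaf=5, n_jobs=-1)
scores = cross_val_score(rf, X, y, cv=5, scoring="r2")  # predictive accuracy
rf.fit(X, y)
importance = rf.feature_importances_   # which lagged conditions matter most
```

The feature importances indicate how far back microclimatic conditions still influence production, which is the dependence on the previous days reported above.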
Resumo:
Among all, the application of nanomaterials in biomedical research and most recently in the environmental one has opened the fields of nanomedicine and nanoremediation. Sensing methods based on fluorescence optical probe are generally requested for their selectivity, sensitivity. However, most imaging methods in literature rely on a fluorescent covalent labelling of the system. Therefore, the main aim of this project was to synthetise a biocompatible fluorogenic hyaluronan probe (HA) polymer functionalised with a rhomadine B (RB) moieties and study its behaviour as an optical probe with different materials with microscopy techniques. A derivatization of HA with RB (HA-RB) was successfully obtained providing a photophysical characterization showing a particular fluorescence mechanism of the probe. Firstly, we tested the interaction with different lab-grade micro and nanoplastics in water. Thanks to the peculiar photophysical behaviour of the probe nanoplastics can be detected with confocal microscopy and more interestingly their nature can be discriminated based on the fluorescence lifetime decay with FLIM microscopy. After, the interaction of a model plant derived metabolic enzyme GAPC1 undergoing oxidative-triggered aggregation was explored with the HA-RB. We highlighted the probe interaction with the protein even at early stage of the kinetic. Moreover, nanoparticle tracking analysis (NTA) experiment demonstrates that the probe is in fact able to interact with the small pre-aggregates in the early stage of the aggregation kinetic. Ultimately, we focused on the possibility to apply the probe in a super resolution microscopy technique, PALM, exploiting its aspecific interaction to characterize the surface topography of PTFE polydisperse microplastics. Optimal conditions were reached at high concentration of the probe (70 nM) where 0.5-5 nM is always advisable for this technique. Thanks to the polymeric nature and fluorescence mechanism of the probe, this technique was able to reveal features of PTFE surface under the diffraction limit (< 250 nm).
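The lifetime-based discrimination relies on fitting the fluorescence decay in each FLIM pixel; a minimal hedged sketch of a single-exponential fit (hypothetical file names, SciPy) is:

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(t, a, tau, c):
    """Single-exponential fluorescence decay with constant background."""
    return a * np.exp(-t / tau) + c

# Hypothetical TCSPC histogram for one FLIM pixel: time bins [ns] and counts.
t = np.load("time_bins.npy")
counts = np.load("pixel_decay.npy")

p0 = (counts.max(), 2.0, counts.min())   # guesses: amplitude, tau [ns], bg
(a, tau, c), _ = curve_fit(decay, t, counts, p0=p0)
```

Mapping the fitted tau across pixels gives the lifetime image; particles with similar brightness but different host polymers can then be told apart by their characteristic lifetimes, which is the discrimination mechanism mentioned above.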
Abstract:
Long-term monitoring of acoustic environments is gaining popularity thanks to the relevant scientific and engineering insights it provides, an interest driven by the constant growth of storage capacity and of the computational power available to process large amounts of data. In this perspective, machine learning (ML) provides a broad family of data-driven statistical techniques for dealing with large databases. Nowadays, the conventional practice of sound level meter measurements limits the global description of a sound scene to an energetic point of view: the equivalent continuous level Leq is the main metric used to define an acoustic environment. Finer analyses involve statistical levels; however, acoustic percentiles rest on temporal assumptions that are not always reliable. A statistical approach based on the occurrences of sound pressure levels brings a different perspective to long-term monitoring: depicting a sound scene through the most probable sound pressure levels, rather than through portions of energy, conveys more specific information about the activity carried out during the measurements, and the statistical mode of the occurrences can capture the typical behaviour of specific kinds of sound sources. The present work proposes an ML-based method to identify, separate and measure coexisting sound sources in real-world scenarios; it is based on long-term monitoring and is addressed to acousticians analysing environmental noise in manifold contexts. The method relies on clustering analysis: two algorithms, the Gaussian Mixture Model and K-means clustering, form the core of a process for investigating different active spaces monitored with sound level meters. The procedure has been applied in two different contexts, university lecture halls and offices, where it shows robust and reliable results in describing the acoustic scenario, and it could represent an important analytical tool for acousticians.
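A minimal hedged sketch of the clustering core, fitting a Gaussian Mixture Model to the occurrences of short-term sound pressure levels (file name and component count are illustrative), is:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical long-term monitoring record: one A-weighted short-Leq sample
# per second, in dB, from a sound level meter.
spl = np.load("short_leq_db.npy").reshape(-1, 1)

# Each Gaussian component is a candidate coexisting source
# (e.g., background hum vs. speech vs. HVAC).
gmm = GaussianMixture(n_components=3, n_init=5).fit(spl)
means = gmm.means_.ravel()   # most probable level of each source, dB
weights = gmm.weights_       # share of time attributed to each source
labels = gmm.predict(spl)    # sample-by-sample source assignment
```

The component means play the role of the "most probable sound pressure level" per source described above, and K-means can be run on the same data as a simpler alternative.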
Abstract:
This doctoral thesis aims at studying, developing, and characterizing cutting-edge equipment for EMC measurements and at proposing innovative, advanced power line filter design techniques. The document summarizes three years of strictly industry-oriented work grounded in EMC standards and regulations, and it contains the main results and findings, with the purpose of bringing innovative contributions to the scientific community. Conducted emission interference is usually suppressed with power line filters. These filters are composed of common mode chokes, X capacitors and Y capacitors, which together mitigate both the differential-mode and the common-mode noise that make up the overall conducted emissions. However, even today, available power line filter design techniques show several disadvantages. First, filters are designed for ideal 50 Ω systems, a condition far from reality. Second, the attenuation introduced by the filter for common-mode or differential-mode noise is analyzed independently, without considering the mode conversion that can be produced by impedance mismatches or by asymmetries in either the power line filter itself or the equipment under test. Finally, the instrumentation used to perform conducted emission measurements is, in most cases, not adequate. All these factors lead to an inaccurate design, increasing the size of the filter and making it more expensive and less performant than it should be.
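The 50 Ω assumption criticized above can be checked quantitatively with elementary two-port analysis; a hedged sketch computing the insertion loss of a single series-L / shunt-C filter stage between arbitrary source and load impedances via ABCD (chain) matrices, with illustrative component values, is:

```python
import numpy as np

def series(z):   # ABCD matrix of a series impedance
    return np.array([[1.0, z], [0.0, 1.0]], dtype=complex)

def shunt(y):    # ABCD matrix of a shunt admittance
    return np.array([[1.0, 0.0], [y, 1.0]], dtype=complex)

def insertion_loss_db(f, L, C, zs, zl):
    """Insertion loss of a series-L / shunt-C stage between a source
    impedance zs and a load impedance zl (not necessarily 50 ohm)."""
    w = 2.0 * np.pi * f
    A, B, C_, D = (series(1j * w * L) @ shunt(1j * w * C)).ravel()
    v_with = zl / (A * zl + B + C_ * zs * zl + D * zs)  # load voltage, filter in
    v_without = zl / (zs + zl)                          # load voltage, filter out
    return -20.0 * np.log10(np.abs(v_with / v_without))

# Example: 100 uH / 100 nF stage between a 1 ohm noise source and 10 ohm load.
il = insertion_loss_db(150e3, 100e-6, 100e-9, zs=1.0, zl=10.0)
```

Sweeping the frequency with measured, non-ideal source and load impedances shows how far the real attenuation deviates from the curve obtained in a 50 Ω / 50 Ω setup, which is exactly the mismatch problem identified above.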