651 results for SMOOTHING SPLINE
Abstract:
In recent years, radars have been used in many applications such as precision agriculture and advanced driver assistance systems. Optimal techniques for estimating the number of targets and their coordinates require solving multidimensional optimization problems that entail a huge computational effort. This has motivated the development of sub-optimal estimation techniques able to achieve good accuracy at a manageable computational cost. Another technical issue in advanced driver assistance systems is the tracking of multiple targets. Even though various filtering techniques have been developed, new efficient and robust algorithms for target tracking can be devised by exploiting a probabilistic approach based on factor graphs and the sum-product algorithm. The two contributions of this dissertation are the investigation of the filtering and smoothing problems from a factor graph perspective and the development of efficient algorithms for two- and three-dimensional radar imaging. Concerning the first contribution, a new factor graph for filtering is derived and the sum-product rule is applied to this graphical model; this makes it possible to interpret known algorithms and to develop new filtering techniques. Then, a general method, based on graphical modelling, is proposed to derive filtering algorithms that involve a network of interconnected Bayesian filters. Finally, the proposed graphical approach is exploited to devise a new smoothing algorithm. Numerical results for dynamic systems show that our algorithms can achieve a better complexity-accuracy tradeoff and better tracking capability than other techniques in the literature. Regarding radar imaging, various algorithms are developed for frequency modulated continuous wave radars; these algorithms rely on novel and efficient methods for the detection and estimation of multiple superimposed tones in noise. The accuracy achieved in the presence of multiple closely spaced targets is assessed on the basis of both synthetically generated data and measurements acquired through two commercial multiple-input multiple-output radars.
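As a generic illustration of the kind of problem tackled in the imaging part (estimating multiple superimposed tones in noise), the following Python sketch picks tone frequencies from a windowed periodogram; it is not the dissertation's estimator, and the sampling rate, tone frequencies and detection thresholds are arbitrary assumptions.

# Generic illustration: estimating multiple superimposed tones in noise via
# periodogram peak picking (not the dissertation's novel estimator; the
# parameter values below are arbitrary assumptions).
import numpy as np
from scipy.signal import find_peaks

fs = 1000.0                        # sampling rate [Hz] (assumed)
n = 1024
t = np.arange(n) / fs
x = (np.sin(2 * np.pi * 110.0 * t)
     + 0.7 * np.sin(2 * np.pi * 240.0 * t)
     + 0.3 * np.random.randn(n))   # two tones plus white noise

# Zero-padded periodogram of the windowed signal
spec = np.abs(np.fft.rfft(x * np.hanning(n), 8 * n)) ** 2
freqs = np.fft.rfftfreq(8 * n, d=1.0 / fs)

# Detect the dominant spectral peaks (in an FMCW beat signal each such tone
# would correspond to a target)
peaks, _ = find_peaks(spec, height=0.1 * spec.max(), distance=50)
print("estimated tone frequencies [Hz]:", freqs[peaks])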
Abstract:
The surface of the Earth is subject to vertical deformations caused by geophysical and geological processes, which can be monitored through Global Positioning System (GPS) observations. The purpose of this work is to investigate GPS height time series to identify interannual signals affecting the Earth’s surface over the European and Mediterranean area during the period 2001-2019. Thirty-six homogeneously distributed GPS stations were selected from the online dataset made available by the Nevada Geodetic Laboratory (NGL) on the basis of the length and quality of their data series. Principal Component Analysis (PCA) is applied to extract the main patterns of the spatial and temporal variability of the GPS Up coordinate. The time series were studied by means of a frequency analysis using a periodogram and the real-valued Morlet wavelet: the periodogram is used to identify the dominant frequencies and the spectral density of the investigated signals, while the wavelet is applied to locate the signals in the time domain and their periodicities. This study has identified, over the European and Mediterranean area, interannual non-linear signals with a period of 2-to-4 years, possibly related to atmospheric and hydrological loading displacements and to climate phenomena such as El Niño Southern Oscillation (ENSO). A clear signal with a period of about six years is present in the vertical component of the GPS time series, likely explainable by the gravitational coupling between the Earth’s mantle and the inner core. Moreover, signals with a period on the order of 8-9 years, which might be explained by mantle-inner core gravity coupling and the cycle of the lunar perigee, and a signal with a period of 18.6 years, likely associated with the lunar nodal cycle, were identified through the wavelet spectrum. However, these last two signals need further confirmation because the GPS time series are still too short compared to the periods involved.
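A minimal sketch of the analysis chain described above (PCA of the Up coordinate followed by a periodogram of the leading mode) is given below in Python; the matrix up is a hypothetical stand-in for the 36-station height series, and the number of epochs and the daily sampling are assumptions.

# Minimal sketch: PCA of GPS Up series followed by a periodogram of the
# leading mode; `up` is a placeholder for the real 36-station height matrix.
import numpy as np
from scipy.signal import periodogram

rng = np.random.default_rng(0)
n_epochs, n_stations = 6935, 36                 # ~19 years of daily solutions (assumed)
up = rng.normal(size=(n_epochs, n_stations))    # stand-in for GPS Up data [mm]

# Principal Component Analysis via SVD of the centred data matrix
up_centred = up - up.mean(axis=0)
u, s, vt = np.linalg.svd(up_centred, full_matrices=False)
pc1 = u[:, 0] * s[0]                            # leading temporal pattern
eof1 = vt[0]                                    # corresponding spatial pattern

# Periodogram of the first principal component (one sample per day)
freq, psd = periodogram(pc1, fs=1.0)            # fs in cycles/day
dominant_period_days = 1.0 / freq[np.argmax(psd[1:]) + 1]
print("dominant period [days]:", dominant_period_days)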
Abstract:
Additive Manufacturing (AM) is nowadays considered an important alternative to traditional manufacturing processes. The literature reports several advantages of AM technology, such as design flexibility, and its use is increasing in automotive, aerospace and biomedical applications. As a systematic literature review suggests, AM is sometimes coupled with voxelization, mainly for representation and simulation purposes. Voxelization can be defined as a volumetric representation technique based on the discretization of the model with hexahedral elements, as pixels do in a 2D image. Voxels are used to simplify geometric representation, store intricate details of the interior and speed up geometric and algebraic manipulation. Compared to the boundary representation used in common CAD software, the inherent advantages of voxels are magnified in specific applications such as lattice or topologically optimized structures for visualization or simulation purposes; due to their complex topology, such structures can only be manufactured through AM. After a thorough review of the existing literature, this project aims to exploit the potential of the voxelization algorithm to develop optimized Design for Additive Manufacturing (DfAM) tools. The final aim is to manipulate and support mechanical simulations of lightweight and optimized structures ready to be manufactured with AM, with particular attention to automotive applications. A voxel-based methodology is developed for the efficient structural simulation of lattice structures. Moreover, thanks to an optimized smoothing algorithm specific to voxel-based geometries, a topologically optimized and voxelized structure can be transformed into a surface triangulated mesh file ready for the AM process. In addition, a modified panel code is developed for simple CFD simulations that use voxels as the discretization unit, enabling a preliminary evaluation of the aerodynamic performance of industrial components. The developed design tools and methodologies perfectly fit the automotive industry’s need to accelerate and increase the efficiency of the design workflow from the conceptual idea to the final product.
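To make the notion of voxelization concrete, the following toy Python sketch builds an occupancy grid from a sampled surface; it illustrates only the basic idea, not the optimized DfAM pipeline of the thesis, and the voxelize helper, the point cloud and the voxel size are invented for the example.

# Toy voxelization: map sampled surface points to a boolean occupancy grid.
import numpy as np

def voxelize(points, voxel_size):
    """Assign each 3D point to a cubic voxel and mark that voxel as occupied."""
    mins = points.min(axis=0)
    idx = np.floor((points - mins) / voxel_size).astype(int)
    dims = idx.max(axis=0) + 1
    grid = np.zeros(dims, dtype=bool)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid, mins

# Example: points sampled on a unit sphere, voxelized at 0.1 resolution
theta = np.random.uniform(0, np.pi, 5000)
phi = np.random.uniform(0, 2 * np.pi, 5000)
points = np.c_[np.sin(theta) * np.cos(phi),
               np.sin(theta) * np.sin(phi),
               np.cos(theta)]
grid, origin = voxelize(points, voxel_size=0.1)
print("occupied voxels:", grid.sum(), "of", grid.size)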
Abstract:
The COVID-19 pandemic, sparked by the SARS-CoV-2 virus, stirred global comparisons to historical pandemics. The mortality rate, initially high, later stabilized globally at around 0.5-3%. Patients manifest a spectrum of symptoms, necessitating efficient triaging for appropriate treatment strategies, ranging from symptomatic relief to antivirals or monoclonal antibodies. Beyond traditional approaches, emerging research suggests a potential link between COVID-19 severity and alterations in gut microbiota composition, impacting inflammatory responses. However, most studies focus on severe hospitalized cases without standardized criteria for severity. Addressing this gap, the first study in this thesis spans diverse COVID-19 severity levels, using 16S rRNA amplicon sequencing on fecal samples from 315 subjects. The findings highlight significant microbiota differences correlated with severity. Machine learning classifiers, including a multi-layer convolutional neural network, demonstrated the potential of microbiota compositional data to predict patient severity, achieving an 84.2% mean balanced accuracy starting one week post-symptom onset. These preliminary results underscore the gut microbiota's potential as a biomarker in clinical decision-making for COVID-19. The second study delves into mild COVID-19 cases, exploring their implications for ‘long COVID’ or Post-Acute COVID-19 Syndrome (PACS). Employing longitudinal analysis, the study unveils dynamic shifts in microbial composition during the acute phase, akin to severe cases. Innovative techniques, including network approaches and spline-based longitudinal analysis, were deployed to assess microbiota dynamics and potential associations with PACS. The research suggests that, even in mild cases, changes in the intestinal microbiota during the acute phase of the infection follow mechanisms similar to those observed in hospitalized patients. These findings lay the foundation for potential microbiota-targeted therapies to mitigate inflammation, potentially preventing long COVID symptoms in the broader population. In essence, these studies offer valuable insights into the intricate relationships between COVID-19 severity, gut microbiota, and the potential for innovative clinical applications.
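As a rough illustration of spline-based longitudinal analysis, the Python sketch below fits a cubic smoothing spline to one taxon's relative abundance over time; the sampling days, abundance values and smoothing factor are synthetic assumptions and do not come from the study.

# Illustrative smoothing-spline fit to a longitudinal abundance trajectory.
import numpy as np
from scipy.interpolate import UnivariateSpline

days = np.linspace(0, 60, 25)                     # days since symptom onset (assumed)
abundance = 0.15 * np.exp(-days / 20) + 0.02 * np.random.randn(25) + 0.05

spline = UnivariateSpline(days, abundance, k=3, s=0.01)   # cubic smoothing spline
trend = spline(np.linspace(0, 60, 200))                   # smoothed trajectory
print("estimated relative abundance at day 30:", float(spline(30)))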
Abstract:
Activation functions within neural networks play a crucial role in Deep Learning, since they allow the network to learn complex and non-trivial patterns in the data. However, the ability to approximate non-linear functions is a significant limitation when implementing neural networks in a quantum computer to solve typical machine learning tasks. The main burden lies in the unitarity constraint of quantum operators, which forbids non-linearity and poses a considerable obstacle to developing such non-linear functions in a quantum setting. Nevertheless, several attempts to realize quantum activation functions have been made in the literature. Recently, QSplines have been proposed to approximate a non-linear activation function by implementing the quantum version of spline functions. Yet, QSplines suffer from various drawbacks. Firstly, the final function estimation requires a post-processing step; thus, the value of the activation function is not directly available as a quantum state. Secondly, QSplines require many error-corrected qubits and very long quantum circuits to be executed. These constraints prevent the adoption of QSplines on near-term quantum devices and limit their generalization capabilities. This thesis aims to overcome these limitations by leveraging hybrid quantum-classical computation. In particular, several methods for Variational Quantum Splines are proposed and implemented, paving the way for the development of complete quantum activation functions and unlocking the full potential of quantum neural networks in the field of quantum machine learning.
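For reference, the classical operation that QSplines and their variational counterparts aim to reproduce can be sketched as a spline approximation of a non-linear activation; the Python snippet below shows only this classical baseline (the sigmoid target, knot grid and test range are arbitrary choices), not the quantum implementation.

# Classical baseline: cubic spline approximation of a sigmoid activation.
import numpy as np
from scipy.interpolate import make_interp_spline

x = np.linspace(-6, 6, 40)                  # knot grid (assumed)
sigmoid = 1.0 / (1.0 + np.exp(-x))

spline = make_interp_spline(x, sigmoid, k=3)    # cubic spline approximation
x_test = np.linspace(-6, 6, 1000)
max_err = np.max(np.abs(spline(x_test) - 1.0 / (1.0 + np.exp(-x_test))))
print("max approximation error:", max_err)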
Abstract:
This thesis develops a filtering system for the pseudoranges computed by a GPS receiver, with the aim of obtaining better positioning. Indeed, after analysing and then filtering the input data by means of carrier-phase measurements, a precision on the order of centimetres can be obtained. The filtering technique is called carrier smoothing and makes it possible to refine the pseudorange measurements using ADR (Accumulated Delta Range) measurements through a complementary-filter structure. To carry out this work, a physical GPS receiver was not needed; instead, a GNSS (Global Navigation Satellite System) simulator, which essentially emulates what a receiver would do, was implemented in the Matlab/Simulink environment.
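A minimal sketch of a standard Hatch-type carrier-smoothing filter is shown below in Python for illustration (the thesis implements the complementary filter in the Matlab/Simulink simulator); the window length and the synthetic code and ADR series are assumptions.

# Hatch-type carrier smoothing: combine noisy code pseudoranges with
# carrier-phase (ADR) deltas in a complementary filter.
import numpy as np

def carrier_smooth(pseudorange, adr, window=100):
    """Smooth code pseudoranges using carrier-propagated predictions."""
    smoothed = np.empty_like(pseudorange)
    smoothed[0] = pseudorange[0]
    for k in range(1, len(pseudorange)):
        n = min(k + 1, window)
        predicted = smoothed[k - 1] + (adr[k] - adr[k - 1])   # carrier-propagated range
        smoothed[k] = pseudorange[k] / n + predicted * (n - 1) / n
    return smoothed

# Example with synthetic data: noisy code measurements, low-noise carrier
t = np.arange(1000)
true_range = 2.0e7 + 100.0 * t
pseudorange = true_range + np.random.randn(1000) * 3.0        # metre-level code noise
adr = true_range + np.cumsum(np.random.randn(1000) * 0.003)   # mm-level phase noise
print("raw std [m]:", np.std(pseudorange - true_range),
      "smoothed std [m]:", np.std(carrier_smooth(pseudorange, adr) - true_range))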