888 results for WAVELET
Abstract:
Background: Atherosclerosis causes millions of deaths and billions in expenses around the world every year. Intravascular Optical Coherence Tomography (IVOCT) is a medical imaging modality that provides high-resolution cross-sectional images of coronary arteries. Nonetheless, quantitative information can only be obtained through segmentation, which in turn enables more adequate diagnostics, therapies and interventions. Since IVOCT is a relatively new modality, many segmentation methods available in the literature for other modalities could be successfully applied to IVOCT images, improving accuracy and usability. Method: An automatic lumen segmentation approach, based on the Wavelet Transform and Mathematical Morphology, is presented. The methodology is divided into three main parts. First, the preprocessing stage attenuates undesirable information and enhances important information. Second, in the feature extraction block, the wavelet transform is combined with an adapted version of Otsu thresholding, so that tissue information is discriminated and binarized. Finally, binary morphological reconstruction refines the binary information and constructs the binary lumen object. Results: The evaluation was carried out by segmenting 290 challenging images from human and pig coronaries and rabbit iliac arteries; the outcomes were compared with gold standards produced by experts. The resulting accuracy was: True Positive (%) = 99.29 ± 2.96, False Positive (%) = 3.69 ± 2.88, False Negative (%) = 0.71 ± 2.96, Max False Positive Distance (mm) = 0.1 ± 0.07, Max False Negative Distance (mm) = 0.06 ± 0.1. Conclusions: By segmenting a large number of IVOCT images with various features, the proposed technique proved to be robust and more accurate than previously published studies; in addition, the method is completely automatic, providing a new tool for IVOCT segmentation.
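A minimal illustrative sketch of the pipeline this abstract describes (wavelet feature extraction, Otsu thresholding, binary morphological reconstruction), written in Python with PyWavelets and scikit-image; the wavelet, decomposition level and structuring element size are assumptions, and a plain Otsu threshold stands in for the adapted version used in the paper:

    import numpy as np
    import pywt
    from skimage.filters import threshold_otsu
    from skimage.morphology import binary_erosion, disk, reconstruction

    def segment_lumen(image):
        # Feature extraction: the wavelet approximation attenuates speckle
        # while keeping the tissue/lumen contrast (wavelet and level assumed).
        approx = pywt.wavedec2(image, 'db4', level=2)[0]
        # Binarize the wavelet features (plain Otsu; the paper adapts it).
        binary = (approx > threshold_otsu(approx)).astype(np.uint8)
        # Binary morphological reconstruction: an eroded marker is dilated
        # under the binary mask to rebuild the lumen object.
        marker = binary_erosion(binary, disk(3)).astype(np.uint8)
        lumen = reconstruction(marker, binary, method='dilation')
        return lumen.astype(bool)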
Abstract:
Doctoral program: Cybernetics and Telecommunications
Abstract:
Doctoral program: Physical Activity, Health and Sports Performance
Abstract:
Since the first underground nuclear explosion, carried out in 1958, the analysis of seismic signals generated by these sources has allowed seismologists to refine the travel times of seismic waves through the Earth and to verify the accuracy of location algorithms (the ground truth for these sources was often known). Long international negotiations have been devoted to limiting the proliferation and testing of nuclear weapons. In particular, the Comprehensive Nuclear-Test-Ban Treaty (CTBT) was opened for signature in 1996; although it has been signed by 178 States, it has not yet entered into force. The Treaty underlines the fundamental role of seismological observations in verifying its compliance, by detecting and locating seismic events and identifying the nature of their sources. A precise definition of the hypocentral parameters is the first step in discriminating whether a given seismic event is natural or not. In case a specific event is deemed suspicious by the majority of the State Parties, the Treaty contains provisions for conducting an on-site inspection (OSI) in the area surrounding the epicenter of the event, located through the International Monitoring System (IMS) of the CTBT Organization. An OSI is supposed to include the use of passive seismic techniques in the area of the suspected clandestine underground nuclear test; high-quality seismological systems are in fact thought to be capable of detecting and locating the very weak aftershocks triggered by underground nuclear explosions in the first days or weeks following the test. This PhD thesis deals with the development of two different seismic location techniques. The first, known as the double-difference joint hypocenter determination (DDJHD) technique, is aimed at locating closely spaced events at a global scale; the locations obtained by this method are characterized by high relative accuracy, although the absolute location of the whole cluster remains uncertain. We eliminate this problem by introducing a priori information: the known location of a selected event. The second technique concerns reliable estimates of the back azimuth and apparent velocity of seismic waves from local events of very low magnitude recorded by a tripartite array at a very local scale. For both techniques we have used cross-correlation of digital waveforms in order to minimize the errors linked with incorrect phase picking. The cross-correlation method relies on the similarity between the waveforms of a pair of events at the same station, at the global scale, and on the similarity between the waveforms of the same event at two different sensors of the tripartite array, at the local scale. After preliminary tests of the reliability of our location techniques based on simulations, we applied both methodologies to real seismic events. The DDJHD technique was applied to a seismic sequence that occurred in the Turkey-Iran border region, using the data recorded by the IMS. Initially, the algorithm was applied to the differences among the original arrival times of the P phases, without cross-correlation. We found that the considerable geometrical spreading, noticeable in the standard locations (namely the locations produced by the analysts of the International Data Center (IDC) of the CTBT Organization, assumed as our reference), was markedly reduced by the application of our technique.
This is what we expected, since the methodology was applied to a sequence of events for which we can assume real closeness among the hypocenters, as they belong to the same seismic structure. Our results point out the main advantage of this methodology: the systematic errors affecting the arrival times have been removed, or at least reduced. The introduction of cross-correlation has not brought evident improvements to our results: the two sets of locations (with and without the application of the cross-correlation technique) are very similar to each other. This suggests that the use of cross-correlation has not substantially improved the precision of the manual pickings; probably the pickings reported by the IDC are good enough to make the random picking error less important than the systematic error on travel times. As a further explanation for the limited improvement given by cross-correlation, it should be noted that the events in our data set generally do not have a good signal-to-noise ratio (SNR): the selected sequence is composed of weak events (magnitude 4 or smaller) and the signals are strongly attenuated because of the large distance between the stations and the hypocentral area. At the local scale, in addition to the cross-correlation, we performed a signal interpolation in order to improve the time resolution. The algorithm thus developed was applied to the data collected during an experiment carried out in Israel between 1998 and 1999. The results point out the following relevant conclusions: a) it is necessary to correlate waveform segments corresponding to the same seismic phases; b) it is not essential to select the exact first arrivals; and c) relevant information can also be obtained from the maximum-amplitude wavelet of the waveforms (particularly in bad SNR conditions). Another remarkable point of our procedure is that its application does not require a long processing time, so the user can immediately check the results. During a field survey, this feature makes possible a quasi real-time check, allowing the immediate optimization of the array geometry if so suggested by the results at an early stage.
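As a rough illustration of the waveform cross-correlation step used at both scales (the thesis code is not reproduced here; the normalization and the parabolic peak interpolation for sub-sample resolution are assumptions), a Python sketch:

    import numpy as np
    from scipy.signal import correlate

    def differential_time(x, y, fs):
        # Normalize both waveforms so the correlation measures shape similarity.
        x = (x - x.mean()) / x.std()
        y = (y - y.mean()) / y.std()
        cc = correlate(x, y, mode='full')
        k = int(np.argmax(cc))
        # Parabolic interpolation of the correlation peak: sub-sample precision,
        # the role played by signal interpolation in the local-scale analysis.
        delta = 0.0
        if 0 < k < len(cc) - 1:
            den = cc[k - 1] - 2.0 * cc[k] + cc[k + 1]
            if den != 0:
                delta = 0.5 * (cc[k - 1] - cc[k + 1]) / den
        lag = (k - (len(y) - 1)) + delta
        return lag / fs   # seconds; positive if x lags y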
Abstract:
Machines with moving parts give rise to vibrations and, consequently, noise. The configuration and condition of each machine yield a characteristic vibration signature. Therefore, a change in the vibration signature, due to a change in the machine state, can be used to detect incipient defects before they become critical. This is the goal of condition monitoring, in which the information obtained from a machine's signature is used to detect faults at an early stage. There is a large number of signal processing techniques that can be used to extract useful information from a measured vibration signal. This study seeks to detect rotating machine defects using a range of techniques including synchronous time averaging, Hilbert transform-based demodulation, the continuous wavelet transform, the Wigner-Ville distribution and the spectral correlation density function. The detection and diagnostic capability of these techniques are discussed and compared on the basis of experimental results concerning gear tooth faults, i.e. a fatigue crack at the tooth root and tooth spalls of different sizes, as well as assembly faults in a diesel engine. Moreover, the sensitivity to fault severity is assessed by applying these signal processing techniques to gear tooth faults of different sizes.
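For illustration, a minimal Python sketch of one of the techniques compared in the study, Hilbert transform-based demodulation (envelope analysis); the band-pass corner frequencies and filter order are placeholders, not values from the experiments:

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def envelope_spectrum(x, fs, band=(2000.0, 6000.0)):
        # Band-pass around a structural resonance excited by the fault
        # (band and filter order are illustrative).
        b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype='band')
        xf = filtfilt(b, a, x)
        # Hilbert transform-based demodulation: amplitude envelope of the band.
        env = np.abs(hilbert(xf))
        env = env - env.mean()
        # Spectrum of the envelope, where fault frequencies and sidebands appear.
        spec = np.abs(np.fft.rfft(env)) / len(env)
        freqs = np.fft.rfftfreq(len(env), d=1.0 / fs)
        return freqs, spec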
Abstract:
Technology scaling increasingly emphasizes the complexity and non-ideality of the electrical behavior of semiconductor devices and boosts interest in alternatives to the conventional planar MOSFET architecture. TCAD simulation tools are fundamental to the analysis and development of new technology generations. However, the increasing device complexity is reflected in an increased dimensionality of the problems to be solved. The trade-off between accuracy and computational cost of the simulation is especially influenced by domain discretization: mesh generation is therefore one of the most critical steps, and automatic approaches are sought. Moreover, the problem size is further increased by process variations, calling for a statistical representation of the single device through an ensemble of microscopically different instances. The aim of this thesis is to present multi-disciplinary approaches to handle this increasing problem dimensionality from a numerical simulation perspective. The topic of mesh generation is tackled by presenting a new Wavelet-based Adaptive Method (WAM) for the automatic refinement of 2D and 3D domain discretizations. Multiresolution techniques and efficient signal processing algorithms are exploited to increase grid resolution in the domain regions where relevant physical phenomena take place. Moreover, the grid is dynamically adapted to follow solution changes produced by bias variations, and quality criteria are imposed on the produced meshes. The further increase in dimensionality due to variability in extremely scaled devices is considered with reference to two increasingly critical phenomena, namely line-edge roughness (LER) and random dopant fluctuations (RD). The impact of such phenomena on FinFET devices, which represent a promising alternative to planar CMOS technology, is estimated through 2D and 3D TCAD simulations and statistical tools, taking into account the matching performance of single devices as well as of basic circuit blocks such as SRAMs. Several process options are compared, including resist- and spacer-defined fin patterning as well as different doping profile definitions. Combining statistical simulations with experimental data, potentialities and shortcomings of the FinFET architecture are analyzed and useful design guidelines are provided, which boost the feasibility of this technology for mainstream applications in sub-45 nm generation integrated circuits.
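A conceptual Python sketch of the wavelet-driven refinement idea behind a method such as WAM (not the actual WAM implementation): detail coefficients of the sampled solution flag the regions where grid resolution should be increased; the wavelet, refinement criterion and threshold are assumptions:

    import numpy as np
    import pywt

    def refinement_mask(solution, fraction=0.05):
        # One-level 2D wavelet decomposition of the solution sampled on the grid.
        _, (cH, cV, cD) = pywt.dwt2(solution, 'haar')
        detail = np.sqrt(cH ** 2 + cV ** 2 + cD ** 2)
        # Coarse cells whose detail energy is a significant fraction of the
        # maximum are flagged for refinement.
        coarse_mask = detail > fraction * detail.max()
        # Bring the mask back to (roughly) the resolution of the input grid.
        fine = np.kron(coarse_mask.astype(np.uint8), np.ones((2, 2), np.uint8))
        return fine.astype(bool)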
Abstract:
During the last few years, several methods have been proposed to study and evaluate characteristic properties of the human skin using non-invasive approaches. Mostly, these methods cover aspects related either to dermatology, to analyze skin physiology and evaluate the effectiveness of medical treatments of skin diseases, or to dermocosmetics and cosmetic science, to evaluate, for example, the effectiveness of anti-aging treatments. For these purposes a routine approach must be followed. Although very accurate and high-resolution measurements can be achieved with conventional methods, such as optical or mechanical profilometry, their use is quite limited, primarily because of the high cost of the instrumentation required, which is usually also cumbersome; this highlights some of their limitations for routine analysis. This thesis aims to investigate the feasibility of a non-invasive skin characterization system based on the analysis of capacitive images of the skin surface. The system relies on a portable CMOS capacitive device which provides a 50 micron/pixel resolution capacitance map of the skin micro-relief. In order to extract characteristic features of the skin topography, image analysis techniques such as watershed segmentation and wavelet analysis have been used to detect the main structures of interest: the wrinkles and plateaus of the typical micro-relief pattern. In order to validate the method, the features extracted from a dataset of skin capacitive images acquired during dermatological examinations of a healthy group of volunteers have been compared with the age of the subjects involved, showing good correlation with the skin ageing effect. Detailed analysis of the output of the capacitive sensor, compared with optical profilometry of a silicone replica of the same skin area, has revealed the potential and some limitations of this technology. Applications to follow-up studies, as needed to objectively evaluate the effectiveness of treatments in a routine manner, are also discussed.
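An illustrative Python sketch of the watershed step mentioned above, separating wrinkle lines from plateaus in a micro-relief image; the smoothing, thresholding polarity and marker-selection parameters are placeholders rather than the values used in the thesis:

    import numpy as np
    from scipy import ndimage as ndi
    from skimage.filters import gaussian, threshold_otsu
    from skimage.feature import peak_local_max
    from skimage.segmentation import watershed

    def segment_microrelief(img):
        smooth = gaussian(img, sigma=2)
        wrinkles = smooth > threshold_otsu(smooth)   # furrow mask (polarity assumed)
        # Plateaus are the regions between furrows; seed them at distance maxima.
        distance = ndi.distance_transform_edt(~wrinkles)
        peaks = peak_local_max(distance, min_distance=10,
                               labels=(~wrinkles).astype(int))
        markers = np.zeros(img.shape, dtype=int)
        markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
        plateaus = watershed(-distance, markers, mask=~wrinkles)
        return wrinkles, plateaus                    # furrow mask, labelled plateaus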
Abstract:
Among the experimental methods commonly used to characterize the behaviour of a full-scale system, dynamic tests are the most complete and efficient procedures. A dynamic test is an experimental process that determines a set of characteristic parameters of the dynamic behaviour of the system, such as the natural frequencies of the structure, the mode shapes and the associated modal damping values. An assessment of these modal characteristics can be used both to verify the theoretical assumptions of the design and to monitor the performance of the structural system during its operational use. The thesis is structured in the following chapters. The first, introductory chapter recalls some basic notions of structural dynamics, focusing the discussion on systems with multiple degrees of freedom (MDOF), which can represent a generic real system under study when it is excited with a harmonic force or in free vibration. The second chapter is entirely centred on the problem of the dynamic identification of a structure subjected to an experimental test in forced vibration. It first describes the construction of the FRF through the classical FFT of the recorded signal. A different method, also in the frequency domain, is subsequently introduced; it allows the FRF to be computed accurately using the geometric characteristics of the ellipse that represents the direct input-output comparison. The two methods are compared and the attention is then focused on some advantages of the proposed methodology. The third chapter focuses on the study of real structures subjected to experimental tests in which the force is not known, as in ambient or impact tests. In this analysis we decided to use the CWT, which allows a simultaneous investigation in the time and frequency domains of a generic signal x(t). The CWT is first applied to free oscillations, with excellent results in terms of frequencies, damping and vibration modes. Its application in the case of ambient vibrations yields accurate modal parameters of the system, although some important observations should be made regarding the damping. The fourth chapter again addresses the post-processing of data acquired after a vibration test, this time through the application of the discrete wavelet transform (DWT). In the first part, the results obtained by the DWT are compared with those obtained by the CWT. Particular attention is given to the use of the DWT as a tool for filtering the recorded signal; in fact, in the case of ambient vibrations the signals are often affected by a significant level of noise. The fifth chapter focuses on another important aspect of the identification process: model updating. In this chapter, starting from the modal parameters obtained from ambient vibration tests performed on the Humber Bridge in England by the University of Porto in 2008 and by the University of Sheffield, an FE model of the bridge is developed, in order to determine what type of model is able to capture more accurately the real dynamic behaviour of the bridge. The sixth chapter outlines the conclusions of the presented research. They concern the application of a frequency-domain method for evaluating the modal parameters of a structure and its advantages, the advantages of applying a procedure based on wavelet transforms in the identification process for tests with unknown input, and finally the problem of 3D modeling of systems with many degrees of freedom and with different types of uncertainty.
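As a sketch of the CWT-based identification idea discussed in the third chapter (not the thesis procedure): the scalogram ridge of a free-decay response yields the natural frequency, and the log-amplitude decay along the ridge yields a damping estimate. Wavelet choice, scale mapping and fitting details are assumptions, in Python with PyWavelets:

    import numpy as np
    import pywt

    def ridge_frequency_damping(x, fs, freqs):
        # Complex Morlet CWT; scales chosen so each row matches a frequency in freqs.
        fc = pywt.central_frequency('cmor1.5-1.0')
        scales = fc * fs / freqs
        coefs, _ = pywt.cwt(x, scales, 'cmor1.5-1.0', sampling_period=1.0 / fs)
        power = np.abs(coefs)
        f_n = freqs[np.argmax(power.mean(axis=1))]   # dominant (natural) frequency
        ridge = power.max(axis=0)                    # amplitude along the ridge
        # For a free decay, ridge amplitude ~ exp(-zeta * 2*pi*f_n * t).
        t = np.arange(len(x)) / fs
        slope = np.polyfit(t, np.log(ridge + 1e-12), 1)[0]
        zeta = -slope / (2.0 * np.pi * f_n)
        return f_n, zeta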
Abstract:
Background: epilepsy is a brain disease that today affects about 1% of the world's population and causes, in those who suffer from it, recurrent and sudden seizures that impair the patient's daily life. Seizures are events that instantaneously disrupt normal brain activity; moreover, they differ between patients, so there is no generalized common treatment. Usually neurologists administer drugs and, in rare cases, epilepsy is treated with neurosurgical operations. However, while surgery has positive effects in reducing seizures, it rarely eliminates them completely. In recent years, scientific research has shown that the EEG signal contains information useful for diagnosing the onset of an epileptic seizure, and several automatic algorithms have been developed to detect epileptic seizures. Aim: the final aim of this research is the applicability and reliability of a portable automatic device capable of detecting seizures and usable as a monitoring system. The analysis carried out in this project is performed with classical and advanced measurement techniques, so as to technically prove the reliability of such a system. The comparison was performed on electroencephalographic signals acquired with two different EEG systems: the standard method used in clinics and the new portable device. Methods: a solid validation of the EEG signals recorded with the new device is necessary. The signals are processed with classical and advanced techniques. After cleaning and alignment, a new method for representing and comparing signals is used: the Bump model. In this thesis the method is extensively described, tested, validated and adapted to the needs of the project. This model is defined as an economical (sparse) approach to the wavelet time-frequency mapping of a signal; in particular, only the events carrying a high amount of energy are retained. Results: the Bump model was implemented as a MATLAB toolbox by its developer F. Vialatte and improved by the author for use with EEG recordings from different systems. The method is validated with artificial signals in order to guarantee its reliability, and it is then applied to processed and aligned EEG signals containing epileptic events, in order to assess the similarity of the two acquisition systems. Conclusions: the visual results confirm the similarity between the two systems, which can be noticed especially when comparing the plots of background EEG activity with those of artefacts or epileptic events. The Bump model is a reliable tool for this application and could also be used in future work (for example with the Stochastic Event Synchrony (SES) method) or in different applications, just as the information extracted from the Bump models could serve as input for synchrony measures, from which useful results can be extracted.
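A rough Python sketch of the idea behind the bump modelling step (a strong simplification of F. Vialatte's MATLAB toolbox, with assumed wavelet, neighbourhood size and energy threshold): compute a wavelet time-frequency map of an EEG channel and keep only its high-energy local maxima:

    import numpy as np
    import pywt
    from scipy.ndimage import maximum_filter

    def extract_bumps(eeg, fs, freqs, energy_fraction=0.2):
        fc = pywt.central_frequency('cmor1.5-1.0')
        scales = fc * fs / freqs
        coefs, _ = pywt.cwt(eeg, scales, 'cmor1.5-1.0', sampling_period=1.0 / fs)
        power = np.abs(coefs) ** 2
        # Keep only local maxima of the map that carry a large share of the energy.
        local_max = power == maximum_filter(power, size=(3, 25))
        strong = power > energy_fraction * power.max()
        fi, ti = np.nonzero(local_max & strong)
        # One "bump" per (frequency, time, energy) triple.
        return [(float(freqs[i]), ti[n] / fs, float(power[i, ti[n]]))
                for n, i in enumerate(fi)]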
Abstract:
Tracking activities during daily life and assessing movement parameters is essential for complementing the information gathered in confined environments, such as clinical and physical activity laboratories, for the assessment of mobility. Inertial measurement units (IMUs) are used to monitor human movement for prolonged periods of time and without space limitations. The focus of this study was to provide a robust, low-cost and unobtrusive solution for evaluating human motion using a single IMU. The first part of the study focused on the monitoring and classification of daily life activities. A simple method that analyses variations in the signal was developed to distinguish two types of activity intervals: active and inactive. A neural classifier was used to classify active intervals; the angle with respect to gravity was used to classify inactive intervals. The second part of the study focused on the extraction of gait parameters using a single inertial measurement unit attached to the pelvis. Two complementary methods were proposed for gait parameter estimation. The first was a wavelet-based method developed for the estimation of gait events. The second was developed for estimating step and stride length during level walking, using the estimates of the previous method. A special integration algorithm was extended to operate on each gait cycle using a specially designed Kalman filter. The developed methods were also applied in various scenarios. The activity monitoring method was used in a PRIN'07 project to assess the mobility levels of individuals living in an urban area. The same method was applied to volleyball players to analyze their fitness levels by monitoring their daily life activities. The methods proposed in these studies provide a simple, unobtrusive and low-cost solution for monitoring and assessing activities outside of controlled environments.
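A minimal Python sketch of the first step described above, splitting an acceleration recording into active and inactive intervals from the short-window variance of the signal magnitude; the window length and threshold are illustrative, not the values used in the study:

    import numpy as np

    def active_intervals(acc, fs, win_s=1.0, threshold=0.02):
        # acc: (n_samples, 3) accelerations; the magnitude removes orientation effects.
        mag = np.linalg.norm(acc, axis=1)
        win = int(win_s * fs)
        n = len(mag) // win
        var = mag[:n * win].reshape(n, win).var(axis=1)
        return var > threshold        # True = active window, False = inactive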
Abstract:
A density function is to be estimated under the model assumption that it lies in a suitable Besov class and has compact support. To this end, a wavelet estimator TW that uses thresholding methods is examined in detail. The asymptotic rate of convergence of TW for a large number of observations is stated and proved. Finally, further wavelet estimators are discussed in an overview and compared with TW. It turns out that TW attains the optimal rate of convergence under many model assumptions.
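For orientation, a hard-thresholded wavelet density estimator of the kind the abstract refers to typically has the following form (the notation and threshold choice are generic, not necessarily those of the thesis):

    \hat f_n(x) = \sum_{k} \hat\alpha_{j_0 k}\,\phi_{j_0 k}(x)
                + \sum_{j=j_0}^{j_1} \sum_{k}
                  \hat\beta_{jk}\,\mathbf{1}\{|\hat\beta_{jk}| > \lambda_j\}\,\psi_{jk}(x),
    \qquad
    \hat\alpha_{j_0 k} = \frac{1}{n}\sum_{i=1}^{n}\phi_{j_0 k}(X_i),\quad
    \hat\beta_{jk} = \frac{1}{n}\sum_{i=1}^{n}\psi_{jk}(X_i),

with a level-dependent threshold such as \lambda_j \asymp \sqrt{j/n}. Over Besov balls, such estimators typically attain, up to logarithmic factors, the minimax rate n^{-2s/(2s+1)} for the mean integrated squared error, which is the kind of optimality result mentioned above.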
Abstract:
Procedures for quantitative walking analysis include the assessment of body segment movements within defined gait cycles. Recently, methods to track human body motion using inertial measurement units have been proposed; it is not known whether these techniques can be readily transferred to clinical measurement situations. This work investigates the aspects necessary for a single inertial measurement unit mounted on the lower back to track orientation and determine spatio-temporal features of gait outside the confines of a conventional gait laboratory. Apparent limitations of different inertial sensors can be overcome by fusing data using methods such as a Kalman filter; the benefits of optimizing such a filter for the type of motion are unknown. 3D accelerations and 3D angular velocities were collected for 18 healthy subjects during treadmill walking. Optimization of the Kalman filter parameters improved pitch and roll angle estimates when compared to angles derived using stereophotogrammetry. A Weighted Fourier Linear Combiner method for estimating 3D orientation angles, by constructing an analytical representation of the angular velocities and allowing drift-free integration, is also presented. When tested, this method provided accurate estimates of 3D orientation when compared to stereophotogrammetry. Methods to determine spatio-temporal features from lower trunk accelerations generally require knowledge of the sensor alignment. A method was developed to estimate the instants of initial and final ground contact from accelerations measured by a waist-mounted inertial device without rigorous alignment. A continuous wavelet transform method was used to filter and differentiate the signal and derive estimates of initial and final contact times. The technique was tested with data recorded for both healthy and pathologic (hemiplegia and Parkinson's disease) subjects and validated using an instrumented mat. The results show that a single inertial measurement unit can assist whole-body gait assessment; however, further investigation is required to understand altered gait timing in some pathological subjects.
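A hedged Python sketch of the wavelet step described above: the vertical trunk acceleration is smoothed and differentiated with Gaussian-derivative wavelets and the gait events are read from the extrema; the scale and which extrema map to initial versus final contacts are assumptions here, not the validated settings of the study:

    import numpy as np
    import pywt
    from scipy.signal import argrelextrema

    def gait_events(acc_v, fs, scale=16):
        # First CWT pass: smoothed derivative of the vertical acceleration.
        d1, _ = pywt.cwt(acc_v, [scale], 'gaus1', sampling_period=1.0 / fs)
        # Second pass: smoothed derivative of the previous output.
        d2, _ = pywt.cwt(d1[0], [scale], 'gaus1', sampling_period=1.0 / fs)
        initial_contacts = argrelextrema(d1[0], np.less)[0]    # minima (assumed)
        final_contacts = argrelextrema(d2[0], np.greater)[0]   # maxima (assumed)
        return initial_contacts / fs, final_contacts / fs      # event times, seconds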
Abstract:
In the present thesis, a new diagnosis methodology based on an advanced use of time-frequency analysis techniques is presented. More precisely, a new fault index that allows individual fault components to be tracked in a single frequency band is defined. In detail, a frequency sliding is applied to the signals being analyzed (currents, voltages, vibration signals), so that each single fault frequency component is shifted into a prefixed single frequency band. The discrete wavelet transform is then applied to the resulting signal to extract the fault signature in the chosen frequency band. Once the state of the machine has been qualitatively diagnosed, a quantitative evaluation of the fault degree is necessary. For this purpose, a fault index based on the energy of the approximation and/or detail signals resulting from the wavelet decomposition has been introduced to quantify the fault extent. The main advantages of the new method over existing diagnosis techniques are the following: - Capability of monitoring the fault evolution continuously over time under any transient operating condition; - Speed/slip measurement or estimation is not required; - Higher accuracy in filtering frequency components around the fundamental in case of rotor faults; - Reduction in the likelihood of false indications by avoiding confusion with other fault harmonics (the contributions of the most relevant fault frequency components under speed-varying conditions are confined to a single frequency band); - Low memory requirement due to the low sampling frequency; - Reduction in processing latency (no repeated sampling operations are required).
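An illustrative Python sketch of the two steps described above, frequency sliding (mixing with a complex exponential so that a chosen fault frequency lands in a fixed low band) followed by a DWT and an energy-based index computed on one decomposition level; the signal name, wavelet and level are assumptions:

    import numpy as np
    import pywt

    def fault_index(current, fs, f_fault, level=4, wavelet='db8'):
        t = np.arange(len(current)) / fs
        # Frequency sliding: shift the fault component of interest towards 0 Hz.
        shifted = np.real(current * np.exp(-2j * np.pi * f_fault * t))
        # DWT of the shifted signal; the approximation now contains the fault band.
        coeffs = pywt.wavedec(shifted, wavelet, level=level)
        approx = coeffs[0]
        # Energy-based fault index on the selected band.
        return np.sum(approx ** 2) / len(approx)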
Abstract:
Atmospheric aerosol particles affect humans and the environment in many ways. Precise characterization of the particles helps to understand their effects and to assess the consequences. Particles can be characterized by their size, their shape and their chemical composition. Laser ablation mass spectrometry makes it possible to determine the size and chemical composition of individual aerosol particles. Within this work, SPLAT (Single Particle Laser Ablation Time-of-flight mass spectrometer) was further developed for an improved analysis of atmospheric aerosol particles in particular. The aerosol inlet was optimized to transfer the widest possible particle size range (80 nm - 3 µm) into SPLAT and to focus the particles into a narrow beam. A new description of the relationship between particle size and particle velocity in vacuum was found. The alignment of the inlet was automated using stepper motors. The optical detection of the particles was improved so that particles smaller than 100 nm can be detected. Building on the optical detection and the automatic tilting of the inlet, a new method for characterizing the particle beam was developed. The control electronics of SPLAT were improved, so that the maximum analysis frequency is limited only by the ablation laser, which can ablate at no more than about 10 Hz. Optimization of the vacuum system reduced ion losses in the mass spectrometer by a factor of 4. In addition to the hardware developments of SPLAT, a large part of this work consisted of the design and implementation of a software solution for the analysis of the raw data obtained with SPLAT. CRISP (Concise Retrieval of Information from Single Particles) is a software package based on IGOR PRO (Wavemetrics, USA) that allows efficient evaluation of the single-particle raw data. CRISP contains a newly developed algorithm for the automatic mass calibration of each individual mass spectrum, including the suppression of noise and of problems with signals exhibiting intense tailing. CRISP provides methods for the automatic classification of the particles: k-means, fuzzy c-means and a form of hierarchical clustering based on a minimum spanning tree are implemented. CRISP offers the possibility of pre-processing the data so that the automatic classification of the particles runs faster and the results are of higher quality. In addition, CRISP can easily sort particles according to predefined criteria. The data structures and infrastructure underlying CRISP were designed with maintenance and extensibility in mind. In the course of this work, SPLAT was successfully deployed in several campaigns, and the capabilities of CRISP were demonstrated on the data sets obtained. SPLAT can now be operated efficiently in the field to characterize the atmospheric aerosol, while CRISP enables a fast and targeted evaluation of the data.
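As a language-neutral illustration of the kind of automatic classification CRISP provides (CRISP itself is written in IGOR PRO), a Python sketch of k-means clustering of normalized single-particle mass spectra; the normalization and number of classes are placeholders:

    import numpy as np
    from sklearn.cluster import KMeans

    def classify_particles(spectra, n_classes=8):
        # spectra: (n_particles, n_mz_channels) array of raw ion signals.
        norm = spectra / (spectra.sum(axis=1, keepdims=True) + 1e-12)
        km = KMeans(n_clusters=n_classes, n_init=10, random_state=0).fit(norm)
        return km.labels_, km.cluster_centers_   # class per particle, class spectra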
Abstract:
Autism Spectrum Disorders (ASDs) describe a set of neurodevelopmental disorders and represent a significant public health problem. Currently, ASDs are not diagnosed before the second year of life, but an early identification of ASDs would be crucial, as early interventions are much more effective than specific therapies starting in later childhood. To this aim, cheap and contact-less automatic approaches have recently aroused great clinical interest. Among them, the cry and the movements of the newborn, both involving the central nervous system, are proposed as possible indicators of neurological disorders. This PhD work is a first step towards solving this challenging problem. An integrated system is presented enabling the recording of audio (crying) and video (movement) data of the newborn, their automatic analysis with innovative techniques for the extraction of clinically relevant parameters, and their classification with data mining techniques. New robust algorithms were developed for the selection of the voiced parts of the cry signal, the estimation of acoustic parameters based on the wavelet transform, and the analysis of the infant's general movements (GMs) through a new body model for segmentation and 2D reconstruction. In addition to a thorough literature review, this thesis presents the state of the art on these topics, which shows that no studies exist concerning normative ranges for newborn infant cry in the first 6 months of life, nor the correlation between cry and movements. Using the new automatic methods, a population of control infants ("low-risk", LR) was compared to a group of "high-risk" (HR) infants, i.e. siblings of children already diagnosed with ASD. A subset of LR infants clinically diagnosed as having typical development (TD) and one infant affected by ASD were compared. The results show that the selected acoustic parameters allow good differentiation between the two groups. This result opens new perspectives, both diagnostic and therapeutic.
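A hedged Python sketch of the audio side of such a pipeline: voiced frames of the cry are selected by short-time energy, and the voiced signal is summarized by wavelet sub-band energies; the frame length, threshold, wavelet and level are illustrative and far simpler than the robust algorithms developed in the thesis:

    import numpy as np
    import pywt

    def cry_features(x, fs, frame_s=0.03, energy_thr=0.1, wavelet='db4', level=5):
        # Frame the signal and keep the frames with significant short-time energy.
        frame = int(frame_s * fs)
        n = len(x) // frame
        frames = x[:n * frame].reshape(n, frame)
        energy = (frames ** 2).mean(axis=1)
        voiced = energy > energy_thr * energy.max()      # crude voiced mask
        voiced_signal = frames[voiced].ravel()
        # Summarize the voiced part of the cry by wavelet sub-band energies.
        coeffs = pywt.wavedec(voiced_signal, wavelet, level=level)
        return np.array([np.sum(c ** 2) for c in coeffs])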