20 results for Time-invariant Wavelet Analysis
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
Every seismic event produces seismic waves which travel throughout the Earth. Seismology is the science of interpreting measurements to derive information about the structure of the Earth. Seismic tomography is the most powerful tool for determining the 3D structure of the Earth's deep interior. Tomographic models obtained at the global and regional scales are an underlying tool for determining the geodynamical state of the Earth, showing evident correlation with other geophysical and geological characteristics. The global tomographic images of the Earth can be written as linear combinations of basis functions from a specifically chosen set, defining the model parameterization. A number of different parameterizations are commonly seen in the literature: seismic velocities in the Earth have been expressed, for example, as combinations of spherical harmonics or by means of the simpler characteristic functions of discrete cells. With this work we focus our attention on this aspect, evaluating a new type of parameterization, performed by means of wavelet functions. It is known from classical Fourier theory that a signal can be expressed as the sum of a possibly infinite series of sines and cosines. This sum is often referred to as a Fourier expansion. The big disadvantage of a Fourier expansion is that it has only frequency resolution and no time resolution. Wavelet analysis (or the wavelet transform) is probably the most recent solution to overcome the shortcomings of Fourier analysis. The fundamental idea behind this analysis is to study a signal according to scale. Wavelets, in fact, are mathematical functions that cut up data into different frequency components and then study each component with a resolution matched to its scale, so they are especially useful in the analysis of non-stationary processes that contain multi-scale features, discontinuities and sharp spikes. Wavelets are essentially used in two ways when applied to the study of geophysical processes or signals: 1) as a basis for the representation or characterization of a process; 2) as an integration kernel for analysis, to extract information about the process. These two types of applications of wavelets in the geophysical field are the object of study of this work. At the beginning we use wavelets as a basis to represent and solve the tomographic inverse problem. After a brief introduction to seismic tomography theory, we assess the power of wavelet analysis in the representation of two different types of synthetic models; then we apply it to real data, obtaining surface wave phase velocity maps and evaluating its abilities by comparison with another type of parameterization (i.e., block parameterization). For the second type of wavelet application we analyze the ability of the continuous wavelet transform in spectral analysis, starting again with some synthetic tests to evaluate its sensitivity and capability, and then applying the same analysis to real data to obtain local correlation maps between different models at the same depth or between different profiles of the same model.
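The contrast between a Fourier expansion and a wavelet expansion can be illustrated with a minimal sketch (assuming Python with NumPy and PyWavelets; the signal, wavelet family and decomposition depth are illustrative placeholders, not taken from the thesis): a 1-D profile with a sharp discontinuity is expanded in a Daubechies wavelet basis and reconstructed from only the largest coefficients, which preserves the localized feature that a truncated Fourier series would smear out.

```python
import numpy as np
import pywt

# Synthetic 1-D "velocity anomaly" profile with a sharp discontinuity
x = np.linspace(0, 1, 1024)
signal = np.sin(2 * np.pi * 3 * x) + (x > 0.6) * 0.5  # smooth part + step

# Expand the signal in a wavelet basis (Daubechies-4, 6 decomposition levels)
coeffs = pywt.wavedec(signal, 'db4', level=6)
flat, slices = pywt.coeffs_to_array(coeffs)

# Keep only the 5% largest-magnitude coefficients (sparse representation)
threshold = np.quantile(np.abs(flat), 0.95)
flat_sparse = np.where(np.abs(flat) >= threshold, flat, 0.0)

# Reconstruct: the step is preserved because wavelets are localized in space
sparse_coeffs = pywt.array_to_coeffs(flat_sparse, slices, output_format='wavedec')
reconstruction = pywt.waverec(sparse_coeffs, 'db4')
print("max reconstruction error:", np.max(np.abs(reconstruction - signal)))
```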
Abstract:
Introduction: Nocturnal frontal lobe epilepsy (NFLE) is a distinct syndrome of partial epilepsy whose clinical features comprise a spectrum of paroxysmal motor manifestations of variable duration and complexity, arising from sleep. Cardiovascular changes during NFLE seizures have previously been observed; however, the extent of these modifications and their relationship with seizure onset have not been analyzed in detail. Objective: The aim of the present study is to evaluate NFLE seizure-related changes in heart rate (HR) and in sympathetic/parasympathetic balance through wavelet analysis of HR variability (HRV). Methods: We evaluated the whole-night digitally recorded video-polysomnography (VPSG) of 9 patients diagnosed with NFLE, with no history of cardiac disorders and normal cardiac examinations. Events with features of NFLE seizures were selected independently by three examiners and included in the study only if a consensus was reached. Heart rate was evaluated by measuring the interval between two consecutive R-waves of the QRS complexes (RRi). RRi series were digitally calculated for a period of 20 minutes including the seizures and resampled at 10 Hz using cubic spline interpolation. A multiresolution analysis was performed (Daubechies-16 wavelet), and the squared level-specific amplitude coefficients were summed across the appropriate decomposition levels in order to compute total band powers in the bands of interest (LF: 0.039062-0.156248 Hz, HF: 0.156248-0.624992 Hz). A general linear model was then applied to estimate changes in RRi, LF and HF powers during three different periods: a basal period (Basal) (30 s, at least 30 s before seizure onset, during which no movements occurred and autonomic conditions were stationary); the pre-seizure period (preSP) (10 s preceding seizure onset); and the seizure period (SP), corresponding to the clinical manifestations. For one of the patients (patient 9) three seizures associated with ictal asystole (IA) were recorded; hence he was treated separately. Results: Group analysis performed on 8 patients (41 seizures) showed that RRi remained unchanged during the preSP, while a significant tachycardia was observed in the SP. A significant increase in the LF component was instead observed during both the preSP and the SP (p<0.001), while the HF component decreased only in the SP (p<0.001). For patient 9, during the preSP and in the first part of the SP a significant tachycardia was observed, associated with increased sympathetic activity (increased LF absolute values and LF%). In the second part of the SP a progressive decrease in HR that gradually dropped below basal values occurred before IA. Bradycardia was associated with an increase in parasympathetic activity (increased HF absolute values and HF%), contrasted by a further increase in LF until the occurrence of IA. Conclusions: These data suggest that changes in autonomic balance toward a sympathetic prevalence always preceded clinical seizure onset in NFLE, even when HR changes were not yet evident, confirming that wavelet analysis is a sensitive technique for detecting sudden variations in autonomic balance occurring during transient phenomena. Finally, we demonstrated that epileptic asystole is associated with a parasympathetic hypertonus counteracted by a marked sympathetic activation.
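As a rough illustration of the band-power computation described above, here is a minimal sketch (assuming Python with SciPy and PyWavelets; the RR series, decomposition depth and level-to-band mapping are inferred from the stated 10 Hz resampling rate and band edges, not taken from the study): the RR series is spline-resampled to a uniform grid, decomposed with a Daubechies-16 wavelet, and the squared detail coefficients are summed over the levels covering the LF and HF bands.

```python
import numpy as np
from scipy.interpolate import CubicSpline
import pywt

def band_powers(rr_times, rr_intervals, fs=10.0):
    """LF/HF powers from an RR-interval series via wavelet multiresolution analysis."""
    # Resample the irregular RR series at fs Hz with a cubic spline
    t_uniform = np.arange(rr_times[0], rr_times[-1], 1.0 / fs)
    rri = CubicSpline(rr_times, rr_intervals)(t_uniform)
    rri = rri - rri.mean()

    # 7-level DWT with a Daubechies-16 wavelet; at fs = 10 Hz the detail levels
    # cover dyadic bands: d7+d6 ~ 0.039-0.156 Hz (LF), d5+d4 ~ 0.156-0.625 Hz (HF)
    coeffs = pywt.wavedec(rri, 'db16', level=7)  # [a7, d7, d6, d5, d4, d3, d2, d1]
    energy = lambda c: float(np.sum(np.asarray(c) ** 2))
    lf = energy(coeffs[1]) + energy(coeffs[2])   # d7 + d6
    hf = energy(coeffs[3]) + energy(coeffs[4])   # d5 + d4
    return lf, hf

# Example with fake, stationary RR data (intervals in seconds)
rng = np.random.default_rng(0)
rr = 0.8 + 0.05 * rng.standard_normal(1200)
lf, hf = band_powers(np.cumsum(rr), rr)
print(f"LF = {lf:.4f}, HF = {hf:.4f}, LF/HF = {lf / hf:.2f}")
```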
Abstract:
The dynamics and geometry of the material inflowing and outflowing close to the supermassive black hole (SMBH) in active galactic nuclei (AGN) are still uncertain. X-rays are the most suitable way to study the AGN innermost regions, thanks to the Fe Kα emission line, a proxy of accretion, and to the Fe absorption lines produced by outflows. Winds are typically classified as Warm Absorbers (slow and mildly ionized) and Ultra Fast Outflows (fast and highly ionized). Transient obscurers (optically thick winds that produce strong spectral hardening in X-rays, lasting from days to months) have been observed recently. Emission and absorption features vary on time-scales from hours to years, probing phenomena at different distances from the SMBH. In this work, we use time-resolved spectral analysis to investigate the accretion and ejection flows, to characterize them individually and to search for correlations. We analyzed XMM-Newton data of a set of the brightest Seyfert 1 galaxies that went through an obscuration event: NGC 3783, NGC 3227, NGC 5548, and NGC 985. Our aim is to search for emission/absorption lines in short-duration spectra (∼10 ks), to explore regions as close to the SMBH as the statistics allow, and possibly to catch transient phenomena. First we run a blind search to detect emission/absorption features, then we analyze their evolution with Residual Maps: we visualize simultaneously the positive and negative residuals from the continuum in the time-energy plane, looking for patterns and their time-scales. In NGC 3783 we were able to ascribe variations of the Fe Kα emission line to absorption at the same energy due to clumps in the obscurer, whose presence is detected at >3σ, and to determine the size of the clumps. In NGC 3227 we detected a wind at ∼0.2c at ∼2σ, briefly appearing during an obscuration event.
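A Residual Map of the kind described here can be sketched as follows (a minimal illustration, assuming Python with NumPy and Matplotlib; the array names, toy continuum and binning are hypothetical and not the authors' pipeline): each short spectrum is compared with a continuum model and the signed residuals, in units of their uncertainty, are stacked into a time-energy image, so that transient emission or absorption features show up as tracks in the plane.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical inputs: counts per time slice and energy bin, plus a continuum model
n_slices, n_energy = 40, 200                      # ~10 ks slices, 0.3-10 keV grid
energies = np.linspace(0.3, 10.0, n_energy)       # keV
rng = np.random.default_rng(1)
continuum = 500.0 * energies ** -1.7              # toy power-law continuum
counts = rng.poisson(continuum, size=(n_slices, n_energy)).astype(float)

# Signed residuals in units of the Poisson uncertainty of the model
residual_map = (counts - continuum) / np.sqrt(continuum)

plt.imshow(residual_map, aspect='auto', origin='lower', cmap='RdBu_r',
           extent=[energies[0], energies[-1], 0, n_slices * 10],  # time in ks
           vmin=-4, vmax=4)
plt.xlabel('Energy (keV)')
plt.ylabel('Time (ks)')
plt.colorbar(label='(data - continuum) / sigma')
plt.show()
```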
Abstract:
During the last few years, several methods have been proposed to study and evaluate characteristic properties of the human skin using non-invasive approaches. Mostly, these methods cover aspects related either to dermatology, to analyze skin physiology and to evaluate the effectiveness of medical treatments in skin diseases, or to dermocosmetics and cosmetic science, to evaluate, for example, the effectiveness of anti-aging treatments. For these purposes a routine approach must be followed. Although very accurate and high-resolution measurements can be achieved with conventional methods, such as optical or mechanical profilometry, their use is quite limited, primarily because of the high cost of the required instrumentation, which is in turn usually cumbersome; these are some of the limitations for a routine-based analysis. This thesis aims to investigate the feasibility of a non-invasive skin characterization system based on the analysis of capacitive images of the skin surface. The system relies on a portable CMOS capacitive device which provides a capacitance map of the skin micro-relief with a resolution of 50 microns/pixel. In order to extract characteristic features of the skin topography, image analysis techniques such as watershed segmentation and wavelet analysis have been used to detect the main structures of interest: the wrinkles and plateaus of the typical micro-relief pattern. In order to validate the method, the features extracted from a dataset of skin capacitive images acquired during dermatological examinations of a group of healthy volunteers have been compared with the age of the subjects involved, showing good correlation with the skin ageing effect. A detailed analysis of the output of the capacitive sensor, compared with optical profilometry of silicone replicas of the same skin area, has revealed the potential and some limitations of this technology. Applications to follow-up studies, as needed to objectively evaluate the effectiveness of treatments in a routine manner, are also discussed.
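A minimal sketch of how wrinkles and plateaus might be separated with a watershed segmentation follows (assuming Python with scikit-image and SciPy; the filter choices, marker strategy and synthetic test image are illustrative assumptions, not the thesis pipeline): plateaus appear as bright regions in the capacitance map, so one marker is seeded per plateau and the watershed basins meet along the wrinkle network.

```python
import numpy as np
from skimage import filters, feature, segmentation

def microrelief_segmentation(capacitance_map):
    """Split a skin capacitance image into micro-relief 'plateau' cells bounded by wrinkles."""
    img = filters.gaussian(capacitance_map.astype(float), sigma=2)  # denoise

    # Plateaus are bright, wrinkles (furrows) are dark: seed one marker per plateau
    peaks = feature.peak_local_max(img, min_distance=15)
    markers = np.zeros(img.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)

    # Flood from the plateau seeds; basins meet along the wrinkle network
    labels = segmentation.watershed(-img, markers)
    wrinkle_mask = segmentation.find_boundaries(labels)
    return labels, wrinkle_mask

# Example with a synthetic bumpy texture standing in for a 50 um/pixel capacitance map
rng = np.random.default_rng(2)
fake_skin = filters.gaussian(rng.random((256, 256)), sigma=8)
labels, wrinkles = microrelief_segmentation(fake_skin)
print("plateau cells found:", labels.max())
```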
Abstract:
Neisseria meningitidis (Nm) is the major cause of septicemia and meningococcal meningitis. During the course of infection, it must adapt to different host environments, a crucial factor for survival. Despite the severity of meningococcal sepsis, little is known about how Nm adapts to permit survival and growth in human blood. A previous time-course transcriptome analysis, using an ex vivo model of human whole-blood infection, showed that Nm alters the expression of nearly 30% of the ORFs of the genome: major dynamic changes were observed in the expression of transcriptional regulators, transport and binding proteins, energy metabolism, and surface-exposed virulence factors. Starting from these data, mutagenesis studies of a subset of up-regulated genes were performed and the mutants were tested for the ability to survive in human whole blood; Nm mutant strains lacking the genes encoding NMB1483, NalP, Mip, NspA, Fur, TbpB, and LctP were sensitive to killing by human blood. The analysis was then extended to the whole Nm transcriptome in human blood, using a customized 60-mer oligonucleotide tiling microarray. The application of specifically developed software combined with this new tiling array allowed the identification of different types of regulated transcripts: small intergenic RNAs, antisense RNAs, 5' and 3' untranslated regions and operons. The expression of these RNA molecules was confirmed by a 5'-3' RACE protocol and specific RT-PCR. Here we describe the complete transcriptome of Nm during incubation in human blood; we were able to identify new proteins important for survival in human blood and also additional roles of previously known virulence factors in aiding survival in blood. In addition, the tiling array analysis demonstrated that Nm expresses a set of new transcripts, not previously identified, and suggests the presence of a circuit of regulatory RNA elements used by Nm to adapt to and proliferate in human blood.
Abstract:
In the present thesis, a new diagnosis methodology based on an advanced use of time-frequency analysis techniques is presented. More precisely, a new fault index is defined that allows individual fault components to be tracked in a single frequency band. In detail, a frequency sliding is applied to the signals being analyzed (currents, voltages, vibration signals), so that each single fault frequency component is shifted into a prefixed frequency band. Then, the discrete wavelet transform is applied to the resulting signal to extract the fault signature in the chosen frequency band. Once the state of the machine has been qualitatively diagnosed, a quantitative evaluation of the fault degree is necessary. For this purpose, a fault index based on the energy of the approximation and/or detail signals resulting from the wavelet decomposition has been introduced to quantify the fault extent. The main advantages of the new method over existing diagnosis techniques are the following: - capability of monitoring the fault evolution continuously over time under any transient operating condition; - no speed/slip measurement or estimation is required; - higher accuracy in filtering frequency components around the fundamental in the case of rotor faults; - reduced likelihood of false indications by avoiding confusion with other fault harmonics (the contributions of the most relevant fault frequency components under speed-varying conditions are confined to a single frequency band); - low memory requirements due to the low sampling frequency; - reduced latency of time processing (no repeated sampling operations required).
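The frequency-sliding plus wavelet-energy idea can be sketched as follows (a minimal illustration, assuming Python with NumPy, SciPy and PyWavelets; the test signal, shift frequency, wavelet and decomposition depth are placeholders, not the thesis implementation): the analytic signal is multiplied by a complex exponential so that the tracked fault component lands near DC, and the energy of the corresponding wavelet approximation band is used as the fault index.

```python
import numpy as np
from scipy.signal import hilbert
import pywt

def fault_index(x, fs, f_fault, wavelet='db8', level=7):
    """Energy-based fault index after sliding the fault component toward 0 Hz."""
    t = np.arange(len(x)) / fs
    analytic = hilbert(x)                                  # one-sided spectrum
    demod = analytic * np.exp(-2j * np.pi * f_fault * t)   # fault component now near DC

    # DWT of real and imaginary parts; the level-'level' approximation covers the
    # prefixed band [0, fs / 2**(level+1)] into which the fault energy was moved,
    # while the fundamental (shifted away from DC) falls outside it.
    energy = 0.0
    for part in (demod.real, demod.imag):
        approx = pywt.wavedec(part, wavelet, level=level)[0]
        energy += np.sum(approx ** 2)
    return float(energy / len(x))

# Toy stator-current-like signal: 50 Hz fundamental plus a weak 38 Hz fault component
fs, dur = 2000.0, 4.0
t = np.arange(int(fs * dur)) / fs
healthy = np.sin(2 * np.pi * 50 * t)
faulty = healthy + 0.05 * np.sin(2 * np.pi * 38 * t)
print("healthy index:", fault_index(healthy, fs, f_fault=38.0))
print("faulty  index:", fault_index(faulty, fs, f_fault=38.0))
```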
Abstract:
The papers included in this thesis deal with a few aspects of insurance economics that have seldom been dealt with in the applied literature. In the first paper I apply, for the first time, the tools of the economics of crime to study the determinants of fraud, using data on Italian provinces. The contributions to the literature are manifold: the price of insuring has a positive correlation with the propensity to defraud; social norms constrain fraudulent behavior, but their strength is curtailed in economic downturns; and I apply a simple extension of the Random Coefficient model, which allows for the presence of time-invariant covariates and asymmetries in the impact of the regressors. The second paper assesses how the evolution of the macro-prudential regulation of insurance companies has been reflected in their equity prices. I employ a standard event study methodology, deriving the definition of the "control" and "treatment" groups from what is implied by the regulatory framework. The main results are: markets care about the evolution of the legislation, and their perception has shifted from an initial positive assessment of a possible implicit "too big to fail" subsidy to a more negative one related to its cost in terms of stricter capital requirements; the size of this phenomenon is positively related to the leverage, size and geographical location of the insurance companies. The third paper introduces a novel methodology to forecast non-life insurance premiums and profitability as a function of macroeconomic variables, using the simultaneous-equation framework traditionally employed in macroeconometric models and a simple theoretical model of insurance pricing to derive a long-term relationship between premiums, claims expenses and short-term rates. The model is shown to provide better forecasts of premiums and profitability than the single-equation specifications commonly used in applied analysis.
Abstract:
Landslides are common features of the landscape of the north-central Apennine mountain range and cause frequent damage to human facilities and infrastructure. Most of these landslides move periodically with moderate velocities and some accelerate abruptly only after particular rainfall events. Synthetic aperture radar interferometry (InSAR) provides a particularly convenient method for studying deforming slopes. We use standard two-pass interferometry, taking advantage of the short revisit time of the Sentinel-1 satellites. In this thesis we present the results of the InSAR analysis carried out on several study areas in the central and northern Italian Apennines. The aims of the work described in the articles contained in this thesis concern: i) the potential of the standard two-pass interferometric technique for the recognition of active landslides; ii) the exploration of the potential of the displacement time series resulting from a two-pass multiple time-scale InSAR analysis; iii) the evaluation of the possibility of making comparisons with climate forcing for cognitive and risk-assessment purposes. Our analysis successfully identified more than 400 InSAR deformation signals (IDS) in the different study areas corresponding to active slope movements. The comparison between IDSs and thematic maps allowed us to identify the main characteristics of the slopes most prone to landslides. The analysis of displacement time series derived from monthly interferometric stacks or single 6-day interferograms allowed the establishment of landslide activity thresholds. This information, combined with the displacement time series, allowed the relationship between ground deformation and climate forcing to be successfully investigated. The InSAR data also made it possible to validate geographical warning systems and to compare the activity state of landslides with triggering probability thresholds.
Abstract:
Quasars and AGN play an important role in many aspects of modern cosmology. Of particular interest is the issue of the interplay between AGN activity and the formation and evolution of galaxies and structures. Studies of nearby galaxies revealed that most (and possibly all) galaxy nuclei contain a super-massive black hole (SMBH) and that between a third and a half of them show some evidence of activity (Kormendy and Richstone, 1995). The discovery of a tight relation between black hole mass and the velocity dispersion of the host galaxy suggests that the growth of SMBHs and the evolution of their host galaxies are linked together. In this context, studying the evolution of AGN through the luminosity function (LF) is fundamental to constrain the theories of galaxy and SMBH formation and evolution. Recently, many theories have been developed to describe the physical processes possibly responsible for a common formation scenario for galaxies and their central black holes (Volonteri et al., 2003; Springel et al., 2005a; Vittorini et al., 2005; Hopkins et al., 2006a), and an increasing number of observations in different bands are focused on collecting larger and larger quasar samples. Many issues, however, remain not yet fully understood. In the context of the VVDS (VIMOS-VLT Deep Survey), we collected and studied an unbiased sample of spectroscopically selected faint type-1 AGN with a unique and straightforward selection function. Indeed, the VVDS is a large, purely magnitude-limited spectroscopic survey of faint objects, free of any morphological and/or color preselection. We studied the statistical properties of this sample and its evolution up to redshift z ∼ 4. Because of the contamination of the AGN light by their host galaxies at the faint magnitudes explored by our sample, we observed that a significant fraction of the AGN in our sample would be missed by the UV-excess and morphological criteria usually adopted for the pre-selection of optical QSO candidates. If not properly taken into account, this failure in selecting particular sub-classes of AGN could, in principle, affect some of the conclusions drawn from samples of AGN based on these selection criteria. The absence of any pre-selection in the VVDS leads to a very complete sample of AGN, including objects with unusual colors and continuum shapes. The VVDS AGN sample in fact shows redder colors than expected by comparing it, for example, with the color track derived from the SDSS composite spectrum. In particular, the faintest objects have on average redder colors than the brightest ones. This can be attributed both to a large fraction of dust-reddened objects and to a significant contamination from the host galaxy. We tested these possibilities by examining the global spectral energy distribution of each object using, in addition to the U, B, V, R and I-band magnitudes, also the UV GALEX and IR Spitzer bands, and fitting it with a combination of AGN and galaxy emission, allowing also for the possibility of extinction of the AGN flux. We found that for 44% of our objects the contamination from the host galaxy is not negligible, and this fraction decreases to 21% if we restrict the analysis to a bright subsample (M_1450 < -22.15). Our estimated integral surface density at I_AB < 24.0 is 500 AGN per square degree, which represents the highest surface density of a spectroscopically confirmed sample of optically selected AGN. We derived the luminosity function in the B band for 1.0 < z < 3.6 using the 1/Vmax estimator.
Our data, more than one magnitude fainter than previous optical surveys, allow us to constrain the faint part of the luminosity function up to high redshift. A comparison of our data with the 2dF sample at low redshift (1 < z < 2.1) shows that the VVDS data cannot be well fitted with the pure luminosity evolution (PLE) models derived by previous optically selected samples. Qualitatively, this appears to be due to the fact that our data suggest the presence of an excess of faint objects at low redshift (1.0 < z < 1.5) with respect to these models. By combining our faint VVDS sample with the large sample of bright AGN extracted from the SDSS DR3 (Richards et al., 2006b) and testing a number of different evolutionary models, we find that the model which best represents the combined luminosity functions, over a wide range of redshift and luminosity, is a luminosity-dependent density evolution (LDDE) model, similar to those derived from the major X-ray surveys. Such a parameterization allows the redshift of the AGN density peak to change as a function of luminosity, thus fitting the excess of faint AGN that we find at 1.0 < z < 1.5. On the basis of this model we find, for the first time from the analysis of optically selected samples, that the peak of the AGN space density shifts significantly towards lower redshift for lower-luminosity objects. The position of this peak moves from z ∼ 2.0 for M_B < -26.0 to z ∼ 0.65 for -22 < M_B < -20. This result, already found in a number of X-ray selected samples of AGN, is consistent with a scenario of "AGN cosmic downsizing", in which the density of the more luminous AGN, possibly associated with more massive black holes, peaks earlier in the history of the Universe (i.e. at higher redshift) than that of low-luminosity AGN, which reach their maximum later (i.e. at lower redshift). This behavior has long been claimed to be present in elliptical galaxies and is not easy to reproduce in the hierarchical cosmogonic scenario, where more massive Dark Matter Halos (DMH) form on average later, by merging of less massive halos.
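For reference, the 1/Vmax estimator used above takes a particularly simple form. The sketch below (Python; inputs, binning and the random example values are hypothetical placeholders, not the survey's actual data or cosmology) sums, in each magnitude bin, the inverse of the maximum comoving volume within which each object would still satisfy the survey flux limit.

```python
import numpy as np

def lf_1_over_vmax(abs_mag, v_max, mag_bins):
    """Binned luminosity function Phi(M) from the 1/Vmax estimator.

    abs_mag  : absolute magnitudes of the objects (e.g. M_B)
    v_max    : maximum comoving volume (Mpc^3) in which each object is detectable
    mag_bins : bin edges in absolute magnitude
    """
    phi = np.zeros(len(mag_bins) - 1)
    phi_err = np.zeros_like(phi)
    which = np.digitize(abs_mag, mag_bins) - 1
    dm = np.diff(mag_bins)
    for i in range(len(phi)):
        w = 1.0 / v_max[which == i]
        phi[i] = w.sum() / dm[i]                 # Mpc^-3 mag^-1
        phi_err[i] = np.sqrt((w ** 2).sum()) / dm[i]
    return phi, phi_err

# Hypothetical example: 500 objects with magnitudes and Vmax values drawn at random
rng = np.random.default_rng(3)
M_B = rng.uniform(-26, -20, 500)
Vmax = rng.uniform(1e7, 1e9, 500)                # Mpc^3, placeholder values
phi, err = lf_1_over_vmax(M_B, Vmax, np.arange(-26, -19.5, 0.5))
print(phi)
```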
Abstract:
The present dissertation relates to methodologies and techniques of industrial and mechanical design. The author intends to give a complete picture of the world of design, presenting the theories of Quality Function Deployment (QFD) and TRIZ, as well as other methods such as planning, budgeting, Value Analysis and Engineering, Concurrent Engineering, Design for Assembly and Manufacturing, etc., and their applications to five concrete cases. In these cases, design techniques such as CAD, CAS, CAM and rendering, which are ways to transform an idea into reality, are also illustrated. The most important outcome of the work is, however, the birth of a new methodology, arising from a comparison between QFD and TRIZ and their integration with other methodologies, such as Time and Cost Analysis, learned and refined during an important experience in a very famous Italian automotive factory.
Abstract:
Purpose: evaluation and comparison of volumetric modulated RapidArc radiotherapy (RA-IMRT) vs linac-based stereotactic body radiotherapy (SBRT) in the salvage treatment of isolated lymph node recurrences in patients affected by gynaecological cancer. Materials and Methods: From January 2010 to September 2011, 15 patients affected by isolated lymph node recurrence of gynaecological cancer underwent salvage radiotherapy after conventional imaging staging with CT and 18F-FDG-PET/CT. Two different radiotherapy techniques were used in this study: RA-IMRT (RapidArc-implemented radiotherapy, Varian Medical Systems, Palo Alto, CA, USA) or SBRT (BrainLAB, Feldkirchen, Germany). Five patients underwent CT scan and all patients underwent 18F-FDG-PET/CT for pre-treatment evaluation and staging. The mean total dose delivered was 54.3 Gy (range 50-60 Gy, conventional fractionation) and 27.4 Gy (range 12-40 Gy, hypofractionation) for RA-IMRT and SBRT respectively. The mean number of fractions was 27.6 (range 25-31) and 3-4, and the mean overall treatment duration was 40.5 days (range 36-45) and 6.5 days (range 5-8 days) for RA-IMRT and SBRT respectively. Results: At the time of the analysis, October 2011, the overall survival was 92.3% (80% for RA-IMRT and 100% for SBRT). Six patients are alive with no evidence of disease and six patients are alive with clinically evident disease in other sites (40% and 50% of patients for RA-IMRT vs SBRT respectively); one patient died of systemic progression of disease and two patients were not evaluable at this time. Conclusions: Our preliminary results show that RA-IMRT and SBRT are excellent local therapies for isolated lymph node recurrences of gynaecological cancer, with a good toxicity profile and local control rate, even if few long-term survivors would be expected. New treatment modalities such as CyberKnife are also being implemented.
Abstract:
Food suppliers currently measure apple quality using basic pomological descriptors. Sensory analysis is expensive, does not allow many samples to be analysed, and cannot be implemented for measuring quality properties in real time. However, sensory analysis is the best way to precisely describe food eating quality, since it is able to define, measure, and explain what is really perceivable by the human senses, using a language that closely reflects the consumers' perception. On the basis of such observations, we developed a detailed protocol for apple sensory profiling by descriptive sensory analysis and instrumental measurements. The collected sensory data were validated by applying rigorous scientific criteria for sensory analysis. The method was then applied to study the sensory properties of apples and their changes in relation to different pre- and post-harvest factors affecting fruit quality, and it proved able to discriminate fruit varieties and to highlight differences in terms of sensory properties. The instrumental measurements confirmed these results. Moreover, the correlation between sensory and instrumental data was studied, and a new effective approach was defined for the reliable prediction of sensory properties from instrumental characterisation. It is therefore possible to propose the application of this sensory-instrumental tool to all the stakeholders involved in apple production and marketing, to obtain a reliable description of apple fruit quality.
Abstract:
The continuous and swift progression of both wireless and wired communication technologies in today's world owes its success to the foundational systems established earlier. These systems serve as the building blocks that enable the enhancement of services to cater to evolving requirements. Studying the vulnerabilities of previously designed systems and their current usage leads to the development of new communication technologies that replace the old ones, such as GSM-R in the railway field. Current industrial research has a specific focus on finding an appropriate telecommunication solution for railway communications to replace the GSM-R standard, which will be switched off in the coming years. Various standardization organizations are currently exploring and designing a radio-frequency technology based standard solution to serve railway communications, in the form of FRMCS (Future Railway Mobile Communication System), to substitute the current GSM-R. On this topic, the primary strategic objective of the research is to assess the feasibility of leveraging current public network technologies, such as LTE, to cater to mission- and safety-critical communication for low-density lines. The research aims to identify the constraints, define a service level agreement with telecom operators, and establish the implementations necessary to make the system as reliable as possible over an open and public network, while considering safety and cybersecurity aspects. The LTE infrastructure would be used to transmit the vital data for the communication of a railway system and to gather and transmit all the field measurements to the control room for maintenance purposes. Given the significance of maintenance activities in the railway sector, the ongoing research includes the implementation of a machine learning algorithm to detect railway equipment faults, reducing time and human analysis errors due to the large volume of measurements from the field.
Abstract:
This work provides a forward step in the study and comprehension of the relationships between stochastic processes and a certain class of integro-partial differential equations, which can be used to model anomalous diffusion and transport in statistical physics. In the first part, we guide the reader through the fundamental notions of probability and stochastic processes, stochastic integration and stochastic differential equations. In particular, within the study of H-sssi processes, we focus on fractional Brownian motion (fBm) and its discrete-time increment process, fractional Gaussian noise (fGn), which provide examples of non-Markovian Gaussian processes. The fGn, together with stationary FARIMA processes, is widely used in the modeling and estimation of long memory, or long-range dependence (LRD). Time series exhibiting long-range dependence are often observed in nature, especially in physics, meteorology and climatology, but also in hydrology, geophysics, economics and many other fields. We study LRD in depth, giving many real-data examples, providing statistical analysis and introducing parametric methods of estimation. Then, we introduce the theory of fractional integrals and derivatives, which indeed turns out to be very appropriate for studying and modeling systems with long-memory properties. After having introduced the basic concepts, we provide many examples and applications. For instance, we investigate the relaxation equation with distributed-order time-fractional derivatives, which describes models characterized by a strong memory component and can be used to model relaxation in complex systems deviating from the classical exponential Debye pattern. Then, we focus on the study of generalizations of the standard diffusion equation, passing through the preliminary study of the fractional forward drift equation. Such generalizations have been obtained by using fractional integrals and derivatives of distributed orders. In order to find a connection between the anomalous diffusion described by these equations and long-range dependence, we introduce and study the generalized grey Brownian motion (ggBm), which is actually a parametric class of H-sssi processes whose marginal probability density function evolves in time according to a partial integro-differential equation of fractional type. The ggBm is, of course, non-Markovian. Throughout the work, we remark many times that, starting from a master equation for a probability density function f(x,t), it is always possible to define an equivalence class of stochastic processes with the same marginal density function f(x,t). All these processes provide suitable stochastic models for the starting equation. In studying the ggBm, we focus on a subclass made up of processes with stationary increments. The ggBm has been defined canonically in the so-called grey noise space; however, we provide a characterization independent of the underlying probability space. We also point out that the generalized grey Brownian motion is a direct generalization of a Gaussian process; in particular, it generalizes Brownian motion and fractional Brownian motion as well. Finally, we introduce and analyze a more general class of diffusion-type equations related to certain non-Markovian stochastic processes. We start from the forward drift equation, which is made non-local in time by the introduction of a suitably chosen memory kernel K(t).
The resulting non-Markovian equation is interpreted in a natural way as the evolution equation of the marginal density function of a random time process l(t). We then consider the subordinated process Y(t) = X(l(t)), where X(t) is a Markovian diffusion. The corresponding time evolution of the marginal density function of Y(t) is governed by a non-Markovian Fokker-Planck equation which involves the same memory kernel K(t). We develop several applications and derive exact solutions. Moreover, we consider different stochastic models for the given equations, providing path simulations.
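A minimal sketch of the kind of process discussed above (assuming Python with NumPy; the exact-covariance Cholesky construction is a standard textbook recipe used here for illustration, not the method of the thesis): fractional Gaussian noise with Hurst exponent H is generated from its autocovariance gamma(k) = 0.5*(|k+1|^(2H) - 2|k|^(2H) + |k-1|^(2H)), its cumulative sum gives a fractional Brownian motion path, and for H > 1/2 the increments exhibit long-range dependence.

```python
import numpy as np

def fgn(n, H, rng=None):
    """Exact simulation of fractional Gaussian noise via Cholesky factorization (O(n^2))."""
    rng = np.random.default_rng() if rng is None else rng
    k = np.arange(n)
    # Autocovariance of fGn: gamma(k) = 0.5 * (|k+1|^2H - 2|k|^2H + |k-1|^2H)
    gamma = 0.5 * (np.abs(k + 1) ** (2 * H) - 2 * np.abs(k) ** (2 * H)
                   + np.abs(k - 1) ** (2 * H))
    cov = gamma[np.abs(k[:, None] - k[None, :])]   # Toeplitz covariance matrix
    L = np.linalg.cholesky(cov)
    return L @ rng.standard_normal(n)

H = 0.8
noise = fgn(2000, H, np.random.default_rng(4))
fbm_path = np.cumsum(noise)                        # fBm as integrated fGn

# Crude check of long-range dependence: the autocorrelation decays like k^(2H-2)
acf_lag50 = np.corrcoef(noise[:-50], noise[50:])[0, 1]
print(f"lag-50 autocorrelation: {acf_lag50:.3f} (theory ~ {H*(2*H-1)*50**(2*H-2):.3f})")
```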
Abstract:
Among the experimental methods commonly used to define the behaviour of a full-scale system, dynamic tests are the most complete and efficient procedures. A dynamic test is an experimental process which defines a set of characteristic parameters of the dynamic behaviour of the system, such as the natural frequencies of the structure, the mode shapes and the corresponding modal damping values. An assessment of these modal characteristics can be used both to verify the theoretical assumptions of the design and to monitor the performance of the structural system during its operational use. The thesis is structured in the following chapters. The first, introductory chapter recalls some basic notions of dynamics of structures, focusing the discussion on systems with multiple degrees of freedom (MDOF), which can represent a generic real system under study when it is excited with a harmonic force or in free vibration. The second chapter is entirely centred on the dynamic identification of a structure subjected to an experimental test in forced vibration. It first describes the construction of the FRF through the classical FFT of the recorded signal. A different method, also in the frequency domain, is subsequently introduced; it allows the FRF to be computed accurately using the geometric characteristics of the ellipse that represents the direct input-output comparison. The two methods are compared and the attention is then focused on some advantages of the proposed methodology. The third chapter focuses on the study of real structures subjected to experimental tests where the force is not known, as in an ambient or impact test. In this analysis we decided to use the CWT, which allows a simultaneous investigation in the time and frequency domains of a generic signal x(t). The CWT is first used to process free oscillations, with excellent results in terms of frequencies, damping and vibration modes. Its application in the case of ambient vibrations yields accurate modal parameters of the system, although some important observations should be made regarding damping. The fourth chapter is still devoted to the post-processing of data acquired after a vibration test, but this time through the application of the discrete wavelet transform (DWT). In the first part, the results obtained by the DWT are compared with those obtained by the application of the CWT. Particular attention is given to the use of the DWT as a tool for filtering the recorded signal; in fact, in the case of ambient vibrations the signals are often affected by a significant level of noise. The fifth chapter focuses on another important aspect of the identification process: model updating. In this chapter, starting from the modal parameters obtained from some ambient vibration tests performed on the Humber Bridge in England by the University of Porto in 2008 and by the University of Sheffield, a FE model of the bridge is built, in order to establish which type of model is able to capture more accurately the real dynamic behaviour of the bridge. The sixth chapter outlines the conclusions of the presented research.
They concern the application of a frequency-domain method for evaluating the modal parameters of a structure and its advantages, the advantages of applying a procedure based on wavelet transforms in the identification process for tests with unknown input, and finally the problem of 3D modelling of systems with many degrees of freedom and with different types of uncertainty.
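As an illustration of the CWT-based identification discussed in the third chapter, here is a minimal sketch (assuming Python with NumPy and PyWavelets; the single-degree-of-freedom signal, wavelet choice and ridge extraction are simplified placeholders, not the thesis procedure): the continuous wavelet transform of a free-decay response is computed with a complex Morlet wavelet, the ridge of maximum modulus gives the frequency, and the slope of the log-amplitude along the ridge gives the damping ratio.

```python
import numpy as np
import pywt

# Free-decay response of a single-degree-of-freedom system (f = 2.5 Hz, zeta = 1 %)
fs, f_n, zeta = 100.0, 2.5, 0.01
t = np.arange(0, 20, 1 / fs)
x = np.exp(-2 * np.pi * f_n * zeta * t) * np.cos(2 * np.pi * f_n * np.sqrt(1 - zeta**2) * t)

# Continuous wavelet transform with a complex Morlet wavelet
scales = np.arange(10, 200)
coefs, freqs = pywt.cwt(x, scales, 'cmor1.5-1.0', sampling_period=1 / fs)

# Ridge: scale of maximum modulus at each instant -> identified frequency
ridge = np.argmax(np.abs(coefs), axis=0)
mid = slice(len(t) // 8, 7 * len(t) // 8)          # avoid edge effects
f_est = np.median(freqs[ridge[mid]])

# Damping: slope of the log-amplitude decay along the ridge
amp = np.log(np.abs(coefs)[ridge[mid], np.arange(len(t))[mid]])
slope = np.polyfit(t[mid], amp, 1)[0]
zeta_est = -slope / (2 * np.pi * f_est)
print(f"identified frequency = {f_est:.3f} Hz, damping ratio = {zeta_est:.4f}")
```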