934 results for Random Scission
Abstract:
High-frequency seismograms contain features that reflect the random inhomogeneities of the Earth. In this work I use an imaging method to locate high-contrast, small-scale heterogeneities with respect to the background Earth medium. This method was first introduced by Nishigami (1991) and then applied to different volcanic and tectonically active areas (Nishigami, 1997; Nishigami, 2000; Nishigami, 2006). The scattering imaging method is applied to two volcanic areas: Campi Flegrei and Mt. Vesuvius. Volcanically and seismically active areas are often characterized by complex velocity structures, due to the presence of rocks with different elastic properties. I introduce some modifications to the original method in order to make it suitable for small and highly complex media. In particular, for very complex media the single-scattering approximation assumed by Nishigami (1991) is not applicable, as the mean free path becomes short; the multiple-scattering, or diffusive, approximation becomes closer to reality. In this thesis, differently from the ordinary Nishigami method (Nishigami, 1991), I use the mean of the recorded coda envelopes as the reference curve and calculate the variations from this average envelope. In this way I implicitly assume no particular scattering regime for the "average" scattered radiation, whereas I consider the variations to be due to waves singly scattered from the strongest heterogeneities. The imaging method is applied to a relatively small area (20 x 20 km), a choice justified by the short codas of the analyzed low-magnitude earthquakes. I apply the unmodified Nishigami method to the volcanic area of Campi Flegrei and compare the results with other tomographies of the same area. The scattering images, obtained with waves at frequencies around 18 Hz, show strong scatterers in correspondence with the submerged caldera rim in the southern part of the Pozzuoli bay.
Strong scattering is also found below the Solfatara crater, characterized by densely fractured, fluid-filled rocks and by a strong thermal anomaly. The modified Nishigami technique is applied to the Mt. Vesuvius area. Results show a low-scattering area just below the central cone and a high-scattering area around it. The high-scattering zone seems to be due to the contrast between the high-rigidity body located beneath the crater and the low-rigidity materials around it. The central low-scattering area overlaps the hydrothermal reservoirs located below the central cone. An interpretation of the results in terms of the geological properties of the medium is also supplied, aiming to find a correspondence between the scattering properties and the geological nature of the material. A complementary result reported in this thesis is that the strong heterogeneity of the volcanic medium creates a phenomenon called "coda localization": the shape of the seismograms recorded at stations located at the top of the volcanic edifice of Mt. Vesuvius differs from the shape of those recorded at the bottom. This behavior is explained by the fact that, at large lapse times, the coda energy is not uniformly distributed within the region surrounding the source.
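The modified method described above rests on comparing each station's coda envelope against the network-average envelope used as the reference curve. A minimal numerical sketch of that comparison on synthetic data (not the thesis code; the smoothing window and decay constant are arbitrary assumptions):

```python
import numpy as np

def coda_envelope(trace, win=50):
    """Smoothed RMS envelope of a seismogram (moving-average window)."""
    kernel = np.ones(win) / win
    return np.sqrt(np.convolve(trace ** 2, kernel, mode="same"))

def envelope_residuals(traces, win=50):
    """Relative deviation of each station's coda envelope from the
    network-average envelope used as the reference curve."""
    envs = np.array([coda_envelope(t, win) for t in traces])
    mean_env = envs.mean(axis=0)
    return (envs - mean_env) / mean_env

rng = np.random.default_rng(0)
t = np.arange(2000)
# synthetic codas: exponentially decaying noise at five stations
traces = np.array([np.exp(-t / 800) * rng.normal(size=t.size) for _ in range(5)])
traces[2, 1000:1200] *= 2.0   # local energy excess, mimicking a strong scatterer
res = envelope_residuals(traces)
```

Positive residuals flag lapse-time windows, and hence scattering volumes, that return more energy than the average, without assuming any particular scattering regime for the average itself.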
Abstract:
The inherent stochastic character of most of the physical quantities involved in engineering models has led to ever-increasing interest in probabilistic analysis. Many approaches to stochastic analysis have been proposed. However, it is widely acknowledged that the only universal method available to solve accurately any kind of stochastic mechanics problem is Monte Carlo simulation. One of the key parts in the implementation of this technique is the accurate and efficient generation of samples of the random processes and fields involved in the problem at hand. In the present thesis an original method for the simulation of homogeneous, multi-dimensional, multi-variate, non-Gaussian random fields is proposed. The algorithm has proved to be very accurate in matching both the target spectrum and the marginal probability distribution. Its computational efficiency and robustness are very good too, even when dealing with strongly non-Gaussian distributions. Moreover, the resulting samples possess all the relevant, well-defined and desired properties of "translation fields", including crossing rates and distributions of extremes. The second part of the thesis lies in the field of non-destructive parametric structural identification. Its objective is to evaluate the mechanical characteristics of the constituent bars of existing truss structures, using static loads and strain measurements. In the cases of missing data and of damage affecting only a small portion of a bar, genetic algorithms have proved to be an effective tool for solving the problem.
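The "translation field" idea mentioned above maps a correlated Gaussian field through the inverse CDF of the target marginal. A minimal 1-D sketch of that mechanism (spectral synthesis with an assumed Gaussian-shaped spectrum and an exponential target marginal; not the thesis algorithm, which also corrects the spectrum to match the target):

```python
import numpy as np
from math import erf

def gaussian_field(n, corr_len, rng):
    """1-D homogeneous Gaussian field by spectral (FFT) synthesis with
    random phases; the Gaussian-shaped spectrum is an assumed example."""
    k = np.fft.rfftfreq(n)
    amp = np.exp(-0.5 * (k * corr_len) ** 2)
    spec = amp * np.exp(1j * rng.uniform(0, 2 * np.pi, k.size))
    g = np.fft.irfft(spec, n)
    return (g - g.mean()) / g.std()

def translate(g, inv_cdf):
    """Translation field: push the standard-normal marginal of g through
    the target distribution's inverse CDF, point by point."""
    u = 0.5 * (1.0 + np.vectorize(erf)(g / np.sqrt(2.0)))  # normal CDF
    return inv_cdf(u)

rng = np.random.default_rng(1)
g = gaussian_field(4096, corr_len=20.0, rng=rng)
x = translate(g, inv_cdf=lambda u: -np.log1p(-u))  # exponential(1) marginal
```

Because the mapping is monotone and pointwise, level crossings of the Gaussian field become level crossings of the translated field, which is why translation fields inherit well-defined crossing rates and extreme-value distributions.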
Abstract:
[EN] In the framework of the European Higher Education Area, assessment has been one of the most important aspects considered. In Spanish universities, one of the main differences with respect to the previous system is the incorporation of continuous assessment into the evaluation process, which is understood in several ways depending on the university, the course and the lecturer. In our context, a first-year Mathematics course in the Faculty of Business Administration at the University of Las Palmas de Gran Canaria (Spain), continuous assessment has required the preparation of a large number of different tests to evaluate the enrolled students; the incorporation of new tools and skills to make the teaching-learning process easier and more dynamic has therefore become a need, mainly in degrees with a large number of students, as in the case considered here. In this work we provide an efficient and effective way to produce random multiple-choice examination tests (although essay exams are also possible) using the Mathematica package and LaTeX, in order to make the preparation of a large number of mid-term tests for a large number of students easier for the lecturers.
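The work above pairs Mathematica with LaTeX; the same idea, drawing questions from a bank, shuffling each question's answer order per seeded test, and emitting LaTeX items together with the answer key, can be sketched in a few lines (hypothetical question bank; Python used here purely for illustration):

```python
import random

# Hypothetical question bank: (statement, choices, index of correct choice)
BANK = [
    ("The derivative of $x^2$ is", ["$2x$", "$x$", "$x^2/2$", "$2$"], 0),
    ("The integral of $1/x$ is", ["$\\ln|x|+C$", "$1/x^2+C$", "$x+C$", "$e^x+C$"], 0),
    ("$\\lim_{x\\to 0} \\sin(x)/x$ equals", ["$1$", "$0$", "$\\infty$", "none"], 0),
]

def make_test(bank, n_questions, seed):
    """Draw questions and shuffle each one's choices; return the LaTeX
    item list and the answer key reproducible from this seed."""
    rng = random.Random(seed)
    chosen = rng.sample(bank, n_questions)
    lines, key = [], []
    for stem, choices, correct in chosen:
        order = rng.sample(range(len(choices)), len(choices))
        key.append("abcd"[order.index(correct)])   # where the right answer landed
        lines.append(f"\\item {stem}")
        for label, idx in zip("abcd", order):
            lines.append(f"  ({label}) {choices[idx]}")
    return "\n".join(lines), key

body, key = make_test(BANK, 2, seed=42)
```

Changing the seed yields a different but reproducible test, which is what makes mass-producing distinct mid-term papers cheap.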
Abstract:
It is usual to hear a strange short sentence: «Random is better than...». Why is randomness a good solution to a certain engineering problem? There are many possible answers, all of them related to the topic considered. In this thesis I discuss two crucial topics that benefit from randomizing some of the waveforms involved in signal manipulation. In particular, the advantages are guaranteed by shaping the second-order statistics of antipodal sequences involved in an intermediate signal processing stage. The first topic is in the area of analog-to-digital conversion and is named Compressive Sensing (CS). CS is a novel paradigm in signal processing that merges signal acquisition and compression: it allows a signal to be acquired directly in a compressed form. In this thesis, after an ample description of the CS methodology and its related architectures, I present a new approach that achieves high compression by designing the second-order statistics of a set of additional waveforms involved in the signal acquisition/compression stage. The second topic addressed in this thesis is in the area of communication systems; in particular, I focus on ultra-wideband (UWB) systems. One option for producing and decoding UWB signals is direct-sequence spreading with multiple access based on code division (DS-CDMA). Within this methodology, I address the coexistence of a DS-CDMA system with a narrowband interferer. To do so, I minimize the joint effect of both multiple-access interference (MAI) and narrowband interference (NBI) on a simple matched-filter receiver. I show that, when the statistical properties of the spreading sequences are suitably designed, performance improvements are possible with respect to a system exploiting chaos-based sequences that minimize MAI only.
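As a CS refresher, with no claim about the thesis's specific waveform-design procedure: an antipodal (±1) sensing matrix compresses a sparse signal into far fewer measurements, and a greedy solver such as Orthogonal Matching Pursuit recovers it. The dimensions and coefficient values below are arbitrary assumptions:

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal Matching Pursuit: greedily recover a k-sparse x from y = Phi @ x."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))  # best-matching atom
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
n, m, k = 256, 64, 4
Phi = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)   # antipodal sensing rows
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = [1.5, -1.0, 2.0, -2.5]
y = Phi @ x_true                # 64 measurements stand in for 256 samples
x_hat = omp(Phi, y, k)
```

The random ±1 rows make the columns of Phi nearly orthogonal, which is exactly the second-order property that allows a 4-sparse signal to survive a 4x compression.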
Abstract:
Packing is important in many industrial sectors, such as mining, pharmaceuticals and, above all, the space sector, since it makes it possible to maximize the filling fraction of the solid propellant of a rocket, obtaining better performance and considerable economic advantages. The work presented in this thesis consists in the study of the random packing, in particular the Random Close Packing case, of a solid propellant; to this end, a C++ code was implemented at the hangar of the Scuola di Ingegneria ed Architettura in Forlì. The main objective was to find the particle size distributions of the ammonium perchlorate and aluminium particles that minimize the voids left between the particles themselves.
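The quantity being optimized here, the filling fraction of a candidate packing, can be estimated by hit-or-miss Monte Carlo sampling. A toy sketch with two non-overlapping spheres in a unit box (not the thesis C++ code):

```python
import numpy as np

def packing_fraction(centers, radii, box, n_samples, seed=0):
    """Monte Carlo estimate of the volume fraction occupied by spheres
    inside a cubic box: sample uniform points and count hits."""
    rng = np.random.default_rng(seed)
    pts = rng.uniform(0.0, box, size=(n_samples, 3))
    d2 = ((pts[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    inside = (d2 <= radii[None, :] ** 2).any(axis=1)
    return inside.mean()

# toy bidisperse "packing": one large and one small sphere in a unit box
centers = np.array([[0.5, 0.5, 0.5], [0.1, 0.1, 0.1]])
radii = np.array([0.3, 0.08])
phi = packing_fraction(centers, radii, box=1.0, n_samples=200_000)
```

For these two spheres the exact occupied fraction is 4/3·π·(0.3³ + 0.08³) ≈ 0.1152, so the estimate can be checked directly; for a real bidisperse propellant packing the same estimator measures how well a size distribution fills the voids.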
Abstract:
This thesis deals with three different physical models, each involving a random component linked to a cubic lattice. First, a model used in numerical calculations of Quantum Chromodynamics is studied. In these calculations random gauge fields are distributed on the bonds of the lattice. The formulation of the model is fitted into the mathematical framework of ergodic operator families. We prove that, for small coupling constants, the ergodicity of the underlying probability measure is indeed ensured and that the integrated density of states of the Wilson-Dirac operator exists. The physical situations treated in the next two chapters are more similar to one another. In both cases the principal idea is to study a fermion system in a cubic crystal with impurities, modeled by a random potential located at the lattice sites. In the second model we apply the Hartree-Fock approximation to such a system. For the case of reduced Hartree-Fock theory at positive temperature and fixed chemical potential we consider the limit of an infinite system, and in that case we show the existence and uniqueness of minimizers of the Hartree-Fock functional. In the third model we formulate the fermion system algebraically via C*-algebras. The question posed here is to calculate the heat production of the system under the influence of an external electromagnetic field. We show that the heat production corresponds exactly to what is empirically predicted by Joule's law in the linear-response regime.
Abstract:
This work has mainly focused on poly(L-lactide) (PLLA), a material for multiple applications with performance comparable to that of petrochemical polymers (PP, PS, PET, etc.), readily recyclable and also compostable. However, PLLA has certain shortcomings that limit its applications: it is a brittle, hard polymer with a very low elongation at break, is hydrophobic, exhibits slow crystallization kinetics and takes a long time to degrade. The properties of PLLA may be modified by copolymerization (random, block or graft) of L-lactide monomers with other co-monomers. In this thesis the crystallization and morphology of random poly(L-lactide-ran-ε-caprolactone) copolymers with different compositions of the two monomers have been studied, since the physical, mechanical, optical and chemical properties of a material depend on this behavior. Thermal analyses were performed by differential scanning calorimetry (DSC) and thermogravimetry (TGA) to observe the behavior of the different copolymer compositions. The crystallization kinetics and morphology of poly(L-lactide-ran-ε-caprolactone) were investigated by polarized-light optical microscopy (PLOM) and differential scanning calorimetry (DSC), and the thermal behavior was observed on crystallization from the melt. It was observed that increasing the amount of PCL in the copolymer decreases the thermal degradation. Studies of the crystallization kinetics have shown that small quantities of PCL in the copolymer increase the overall crystallization kinetics and the crystal growth rate, which decreases at higher PCL contents.
Abstract:
The objective of this thesis work is to implement a computational code, based on the Lubachevsky-Stillinger algorithm, to predict the volume fraction occupied by the solid particles that make up the grain in solid-propellant rocket motors. Particular attention is given to the Random Close Packing problem that this algorithm models, and to the hypotheses under which such a model can be applied to the problem at hand. The procedures used to obtain the numerical results of the simulations and their motivation are also described, together with the limits of the model employed and the improvements made for a more efficient and faster execution.
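The Lubachevsky-Stillinger algorithm grows particle radii while resolving collisions until the packing jams. A crude, time-stepped 2-D caricature of that growth-plus-relaxation loop (all rates and counts are arbitrary assumptions; the real algorithm is event-driven, with exact collision times, and three-dimensional):

```python
import numpy as np

def grow_pack(n, steps, grow=1e-4, sweeps=20, seed=0):
    """Lubachevsky-Stillinger-flavoured loop in a periodic 2-D unit box:
    all disk radii grow at a fixed rate while overlap-resolving
    relaxation sweeps push intersecting pairs apart."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0.0, 1.0, (n, 2))
    r = 0.0
    for _ in range(steps):
        r += grow                                   # uniform growth rate
        for _ in range(sweeps):
            d = pos[:, None, :] - pos[None, :, :]
            d -= np.round(d)                        # minimum-image (periodic) wrap
            dist = np.sqrt((d ** 2).sum(-1)) + np.eye(n)  # avoid /0 on diagonal
            overlap = np.clip(2 * r - dist, 0.0, None)
            np.fill_diagonal(overlap, 0.0)
            # push each overlapping pair apart, half the overlap each
            pos += ((d / dist[..., None]) * overlap[..., None] / 2).sum(axis=1)
            pos %= 1.0
    return pos, r, n * np.pi * r ** 2   # positions, final radius, area fraction

pos, r, phi = grow_pack(n=20, steps=1000)
```

In the real algorithm the growth rate also controls how "random" versus how ordered the jammed state is, which is what makes it a standard tool for producing Random Close Packings.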
Abstract:
This work presents the design of a computation core for impedance measurements on the skin using pseudo-random signals. The measurement is performed by applying the random signal to the impedance and obtaining the impulse response through a convolution operation. The computation core was implemented in VHDL.
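The thesis implements this in VHDL; the underlying mathematics, exciting an unknown system with an antipodal pseudo-random sequence and recovering its impulse response by cross-correlation, can be sketched in Python (the FIR "impedance" below is a made-up stand-in):

```python
import numpy as np

def measure_impulse_response(system, n, seed=0):
    """Estimate a system's impulse response by driving it with an
    antipodal pseudo-random sequence and circularly cross-correlating
    input and output (the sequence's autocorrelation is nearly a delta)."""
    rng = np.random.default_rng(seed)
    x = rng.choice([-1.0, 1.0], size=n)       # pseudo-random excitation
    y = system(x)
    # circular cross-correlation via FFT: h ~ IFFT(FFT(y) * conj(FFT(x))) / n
    return np.fft.irfft(np.fft.rfft(y) * np.conj(np.fft.rfft(x)), n) / n

# toy "impedance": a known FIR response to identify
h_true = np.array([1.0, 0.5, 0.25, 0.125])
system = lambda x: np.convolve(x, h_true)[: x.size]
h_est = measure_impulse_response(system, n=4096)
```

The estimate improves as the sequence gets longer, since the off-peak autocorrelation of the ±1 sequence shrinks like 1/√n.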
Abstract:
This thesis studies the emergence of critical events in a simple Integrate-and-Fire neural model based on Markovian stochastic dynamical processes defined on a network. The electrical neural signal is modeled as a flow of particles. Attention is focused on the transient phase of the system, seeking to identify phenomena similar to neural synchronization, which can be considered a critical event. Particularly simple networks were studied, showing that the proposed model can produce "cascade" effects in the neural activity, due to Self-Organized Criticality (the self-organization of the system into unstable states); such effects are not observed in random walks on the same networks. A small random stimulus is able to generate considerable fluctuations in the network activity, especially when the system is in a phase at the edge of equilibrium. The activity peaks detected in this way are interpreted as avalanches of neural signal, a phenomenon related to synchronization.
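The cascade mechanism can be illustrated with a deliberately tiny integrate-and-fire sketch: nodes accumulate signal, fire when they reach threshold, and pass one unit to each neighbour. The chain network and parameter values are arbitrary assumptions, chosen so that a single small stimulus propagates end to end:

```python
import numpy as np

def avalanche(thresholds, potentials, adj, stimulus_node):
    """Deliver one unit of signal to stimulus_node; any node at or above
    threshold fires, resets by its threshold, and sends one unit to each
    out-neighbour, possibly triggering a cascade. Returns (#fires, state)."""
    v = potentials.astype(float).copy()
    v[stimulus_node] += 1.0
    fired, queue = 0, [stimulus_node]
    while queue:
        i = queue.pop()
        if v[i] >= thresholds[i]:
            fired += 1
            v[i] -= thresholds[i]                 # reset after firing
            for j in np.flatnonzero(adj[i]):
                v[j] += 1.0                       # pass one unit downstream
                queue.append(j)
    return fired, v

adj = np.eye(5, k=1)                  # directed chain 0 -> 1 -> ... -> 4
thresholds = np.full(5, 2.0)
potentials = np.full(5, 1.0)          # every node one unit below threshold
n_fired, v_after = avalanche(thresholds, potentials, adj, stimulus_node=0)
```

With every node poised one unit below threshold (the "edge of equilibrium"), a single stimulus makes all five nodes fire, a system-wide avalanche; a plain random walk on the same chain would move one particle per step and produce no such burst.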
Abstract:
In most real-life environments, mechanical or electronic components are subjected to vibrations. Some of these components may have to pass qualification tests to verify that they can withstand the fatigue damage they will encounter during their operational life. In order to conduct a reliable test, the environmental excitations can be taken as a reference to synthesize the test profile: this procedure is referred to as "test tailoring". For cost and feasibility reasons, accelerated qualification tests are usually performed: the duration of the original excitation, which acts on the component for its entire life-cycle (typically hundreds or thousands of hours), is reduced. In particular, the "Mission Synthesis" procedure quantifies the damage induced by the environmental vibration through two functions: the Fatigue Damage Spectrum (FDS) quantifies the fatigue damage, while the Maximum Response Spectrum (MRS) quantifies the maximum stress. A new random Power Spectral Density (PSD) can then be synthesized, with the same amount of induced damage but a specified duration, in order to conduct accelerated tests. In this work, the Mission Synthesis procedure is applied to so-called Sine-on-Random vibrations, i.e. excitations composed of random vibrations superimposed on deterministic contributions, in the form of sine tones typically due to rotating parts of the system (e.g. helicopters, engine-mounted components, ...). In fact, a proper test tailoring should preserve not only the accumulated fatigue damage but also the "nature" of the excitation (in this case the sinusoidal components superimposed on the random process) in order to obtain reliable results. The classic time-domain approach is taken as the reference for comparing different methods of FDS calculation in the presence of Sine-on-Random vibrations. A methodology to compute a Sine-on-Random specification based on a mission FDS is then presented.
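The time-domain FDS idea can be caricatured in a few lines: filter the excitation through a single-DOF system at each natural frequency, then accumulate damage from the response peaks. This is a heavily simplified sketch, replacing rainflow counting and Miner's rule with a bare sum of |peak|^b (b is the Basquin exponent; all material constants set to one):

```python
import numpy as np

def sdof_response(accel, fn, Q, fs):
    """Relative-displacement response of an SDOF system (natural frequency
    fn, quality factor Q) to a base acceleration, computed via FFT."""
    n = accel.size
    w = 2 * np.pi * np.fft.rfftfreq(n, 1 / fs)
    wn = 2 * np.pi * fn
    H = -1.0 / (wn ** 2 - w ** 2 + 1j * wn * w / Q)  # base accel -> rel. disp.
    return np.fft.irfft(np.fft.rfft(accel) * H, n)

def fds(accel, freqs, Q, fs, b=7):
    """Toy Fatigue Damage Spectrum: damage at each natural frequency taken
    as the sum of |local response peak|**b. A sketch, not the full method."""
    out = []
    for fn in freqs:
        z = sdof_response(accel, fn, Q, fs)
        mid = z[1:-1]
        peaks = mid[(mid > z[:-2]) & (mid > z[2:])]  # local maxima
        out.append(float(np.sum(np.abs(peaks) ** b)))
    return np.array(out)

rng = np.random.default_rng(3)
fs = 2048.0
accel = rng.normal(size=int(fs * 4))     # 4 s of broadband random excitation
damage = fds(accel, np.array([50.0, 100.0, 200.0]), Q=10, fs=fs)
```

For a Sine-on-Random excitation the sine tones would add large deterministic peaks at their own frequencies, which is exactly why the FDS computation, and the synthesized specification, must treat them separately from the random part.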
Abstract:
This thesis investigates one-dimensional random walks in random environment whose transition probabilities might have an infinite variance. The ergodicity of the dynamical system "from the point of view of the particle" is proved under the assumptions of transitivity and existence of an absolutely continuous steady state on the space of the environments. We show that, if the average of the local drift over the environments is summable and null, then the RWRE is recurrent. We provide an example satisfying all the hypotheses.
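A minimal simulation of the object under study, making no claim about the thesis's hypotheses (the i.i.d. uniform environment below is an arbitrary choice, and the environment is extended periodically only to keep the sketch finite):

```python
import numpy as np

def rwre_path(omega, steps, start=0, seed=0):
    """1-D random walk in random environment: at site i the walker steps
    right with probability omega[i mod len(omega)], left otherwise."""
    rng = np.random.default_rng(seed)
    path = np.empty(steps + 1, dtype=int)
    path[0] = x = start
    for t in range(steps):
        x += 1 if rng.random() < omega[x % omega.size] else -1
        path[t + 1] = x
    return path

rng_env = np.random.default_rng(42)
omega = rng_env.uniform(0.2, 0.8, size=1000)  # an assumed i.i.d. environment
path = rwre_path(omega, steps=10_000)
```

The key feature of the model is that the jump probability is frozen per site (quenched disorder), so the walk's long-run behaviour, transient, recurrent, or anomalously slow, depends on statistics of the environment such as the averaged local drift.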
Abstract:
In this thesis we deal with the problem of describing a transportation network in which the moving objects are subject to both finite transportation capacity and finite accommodation capacity. Movements across such a system are realistically simultaneous, which poses some challenges when formulating a mathematical description. We derive such a general model from one posed on a simplified problem based on asynchronous particle transitions, considering one-step processes under the assumption that the system can be described by discrete-time Markov processes with finite state space. After describing the prescribed dynamics in terms of master equations, we determine the stationary states of the processes considered. Numerical simulations then lead to the conclusion that a general system naturally evolves toward a congested state when its particles transition simultaneously and a single constraint, in the form of network node capacity, is considered. Moreover, the congested nodes of a system tend to be located in adjacent spots in the network, thus forming local clusters of congested nodes.
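The asynchronous one-step dynamics that serves as the starting point above can be sketched directly: one occupied node at a time attempts to push a particle to a neighbour, and the move is accepted only if the target is below its accommodation capacity. Network, capacities and step count are arbitrary assumptions:

```python
import numpy as np

def simulate(adj, occ0, capacity, steps, seed=0):
    """Asynchronous one-step transport process with a node-capacity
    constraint: blocked moves leave the particle where it is."""
    rng = np.random.default_rng(seed)
    occ = occ0.copy()
    for _ in range(steps):
        i = rng.choice(np.flatnonzero(occ))        # a node holding particles
        j = rng.choice(np.flatnonzero(adj[i]))     # a random neighbour
        if occ[j] < capacity[j]:                   # finite accommodation
            occ[i] -= 1
            occ[j] += 1
    return occ

# ring of 6 nodes, all 12 particles initially piled on node 0
d = np.abs(np.subtract.outer(np.arange(6), np.arange(6)))
adj = (d == 1) | (d == 5)
occ0 = np.array([12, 0, 0, 0, 0, 0])
capacity = np.full(6, 4)
occ = simulate(adj, occ0, capacity, steps=5000)
```

Particle number is conserved and no node is ever filled beyond its capacity by an accepted move; a simultaneous-update version of the same rule is where the congestion and clustering phenomena described in the thesis appear.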
Abstract:
We propose a new and clinically oriented approach to perform atlas-based segmentation of brain tumor images. A mesh-free method is used to model tumor-induced soft tissue deformations in a healthy brain atlas image with subsequent registration of the modified atlas to a pathologic patient image. The atlas is seeded with a tumor position prior and tumor growth simulating the tumor mass effect is performed with the aim of improving the registration accuracy in case of patients with space-occupying lesions. We perform tests on 2D axial slices of five different patient data sets and show that the approach gives good results for the segmentation of white matter, grey matter, cerebrospinal fluid and the tumor.