904 results for Cross-correlation function


Relevance: 80.00%

Abstract:

The southern Apennines of Italy have experienced several destructive earthquakes in both historic and recent times. The present-day seismicity, characterized by small-to-moderate magnitude earthquakes, was used as a probe to obtain a deeper knowledge of the fault structures on which the largest earthquakes occurred in the past. To infer a three-dimensional seismic image, both the problem of data quality and the selection of a reliable and robust tomographic inversion strategy were addressed. Data quality was ensured by developing optimized procedures for the measurement of P- and S-wave arrival times, through the use of polarization filtering and the application of a refined re-picking technique based on waveform cross-correlation. An iterative, linearized, damped tomographic inversion, combined with a multiscale inversion strategy, was adopted. The retrieved P-wave velocity model indicates a strong velocity variation along the direction orthogonal to the Apenninic chain. This variation defines two domains characterized by relatively low and high velocity values, respectively. Comparing the inferred P-wave velocity model with a portion of a structural section available in the literature, the high-velocity body was correlated with the Apulia carbonate platform, whereas the low-velocity body was associated with the basinal deposits. The deduced Vp/Vs ratio is lower than 1.8 in the shallower part of the model, while at depths between 5 km and 12 km it increases up to 2.1 in correspondence with the area of higher seismicity. This confirms that areas characterized by higher Vp/Vs values are more prone to generate earthquakes, as a response to the presence of fluids and higher pore pressures.
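The cross-correlation re-picking step lends itself to a compact illustration. The following sketch (synthetic data and function names of my own, not the thesis code) estimates the arrival-time shift between two waveforms as the lag maximizing their cross-correlation:

```python
import numpy as np

def repick_delay(reference, trace):
    """Estimate the arrival-time shift of `trace` relative to `reference`
    as the lag that maximizes their cross-correlation."""
    # Remove means so amplitude offsets do not bias the correlation.
    r = reference - reference.mean()
    t = trace - trace.mean()
    cc = np.correlate(t, r, mode="full")
    # Lag 0 sits at index len(r) - 1 of the full correlation.
    return int(np.argmax(cc)) - (len(r) - 1)

# Synthetic wavelet and a copy delayed by 7 samples.
n = np.arange(256)
wavelet = np.exp(-((n - 60) / 10.0) ** 2) * np.sin(0.4 * n)
delayed = np.roll(wavelet, 7)
print(repick_delay(wavelet, delayed))  # → 7
```

In practice the reference would be a master event or stack, the correlation would be computed on a window around the preliminary pick, and sub-sample precision could be obtained by interpolating around the correlation peak.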

Relevance: 80.00%

Abstract:

Myocardial perfusion quantification by means of contrast-enhanced cardiac magnetic resonance images relies on time-consuming frame-by-frame manual tracing of regions of interest. In this thesis, a novel automated technique for myocardial segmentation and non-rigid registration is presented as a basis for perfusion quantification. The proposed technique is based on three steps: reference frame selection, myocardial segmentation and non-rigid registration. In the first step, the reference frame in which both endo- and epicardial segmentation will be performed is chosen. Endocardial segmentation is achieved by means of a statistical region-based level-set technique followed by a curvature-based regularization motion. Epicardial segmentation is achieved by means of an edge-based level-set technique, again followed by a regularization motion. To take into account the changes in position, size and shape of the myocardium throughout the sequence due to out-of-plane respiratory motion, a non-rigid registration algorithm is required. The proposed non-rigid registration scheme consists of a novel multiscale extension of the normalized cross-correlation algorithm in combination with level-set methods. The myocardium is then divided into standard segments. Contrast-enhancement curves are computed by measuring the mean pixel intensity of each segment over time, and perfusion indices are extracted from each curve. The overall approach has been tested on synthetic and real datasets. For validation purposes, the sequences were manually traced by an experienced interpreter, and contrast-enhancement curves as well as perfusion indices were computed. Comparisons between automatically extracted and manually obtained contours and enhancement curves showed high inter-technique agreement. Comparisons of perfusion indices computed using both approaches against quantitative coronary angiography and visual interpretation demonstrated that the two techniques have similar diagnostic accuracy. In conclusion, the proposed technique allows fast, automated and accurate measurement of intra-myocardial contrast dynamics, and may thus address the strong clinical need for quantitative evaluation of myocardial perfusion.
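As a minimal sketch of the normalized cross-correlation (NCC) idea underlying such registration schemes (synthetic frames and hypothetical function names, not the thesis implementation):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation coefficient of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom)

def estimate_translation(prev, curr, max_shift=5):
    """Brute-force search for the integer (dy, dx) maximizing NCC."""
    best, best_shift = -2.0, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(curr, -dy, axis=0), -dx, axis=1)
            score = ncc(prev, shifted)
            if score > best:
                best, best_shift = score, (dy, dx)
    return best_shift

rng = np.random.default_rng(0)
frame = rng.normal(size=(64, 64))
moved = np.roll(np.roll(frame, 3, axis=0), -2, axis=1)  # shift by (3, -2)
print(estimate_translation(frame, moved))  # → (3, -2)
```

A multiscale extension of the kind described above would repeat such a search on a coarse-to-fine image pyramid, cutting the cost of the brute-force shift search while keeping a large capture range.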

Relevance: 80.00%

Abstract:

We obtain the exact time-dependent Kohn-Sham potentials Vks for 1D Hubbard chains, driven by a d.c. external field, using the time-dependent electron density and current density obtained from exact many-body time-evolution. The exact Vxc is compared to the adiabatically-exact Vad-xc and the “instantaneous ground state” Vigs-xc. The effectiveness of these two approximations is analyzed. Approximations for the exchange-correlation potential Vxc and its gradient, based on the local density and on the local current density, are also considered and both physical quantities are observed to be far outside the reach of any possible local approximation. Insight into the respective roles of ground-state and excited-state correlation in the time-dependent system, as reflected in the potentials, is provided by the pair correlation function.

Relevance: 80.00%

Abstract:

The first chapter introduces the study carried out and describes a measurement method subsequent to the surface characterization. The second chapter describes the analyzed samples and, specifically, the growth of silicon nanowires by MaCE (metal-assisted chemical etching). The third chapter describes the AFM instrument used and the characterization theory underlying the study. The fourth section reports the results obtained, while the conclusions draw together the values obtained for the RMS roughness and the roughness exponent.
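As a minimal illustration of the RMS roughness figure extracted from AFM height data (a toy height profile, not the measured data):

```python
import numpy as np

def rms_roughness(heights):
    """RMS roughness: root-mean-square deviation of heights about their mean."""
    h = np.asarray(heights, dtype=float)
    return float(np.sqrt(np.mean((h - h.mean()) ** 2)))

profile = np.array([1.0, 3.0, 1.0, 3.0])  # heights, e.g. in nm
print(rms_roughness(profile))  # → 1.0
```

The roughness exponent, by contrast, is obtained from how the roughness of windows of size L scales with L (w(L) ∝ L^α), so it requires evaluating this quantity over a range of window sizes and fitting the slope on a log-log plot.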

Relevance: 80.00%

Abstract:

The interplay of hydrodynamic and electrostatic forces is of great importance for the understanding of colloidal dispersions. Theoretical descriptions are often based on the so-called standard electrokinetic model. This mean-field approach combines the Stokes equation for the hydrodynamic flow field, the Poisson equation for electrostatics and a continuity equation describing the evolution of the ion concentration fields. In the first part of this thesis a new lattice method is presented to efficiently solve the set of non-linear equations for a charge-stabilized colloidal dispersion in the presence of an external electric field. Within this framework, the research is mainly focused on the calculation of the electrophoretic mobility. Since this transport coefficient is independent of the electric field only for small driving, the algorithm is based upon a linearization of the governing equations. The zeroth order is the well-known Poisson-Boltzmann theory, and the first order is a coupled set of linear equations. Furthermore, this set of equations is divided into several subproblems. A specialized solver is developed for each subproblem, and various tests and applications are discussed for every particular method. Finally, all solvers are combined in an iterative procedure and applied to several interesting questions, for example the effect of the screening mechanism on the electrophoretic mobility, or the charge dependence of the field-induced dipole moment and ion clouds surrounding a weakly charged sphere. In the second part a quantitative data analysis method is developed for a new experimental approach known as "Total Internal Reflection Fluorescence Cross-Correlation Spectroscopy" (TIR-FCCS). The TIR-FCCS setup is an optical method using fluorescent colloidal particles to analyze the flow field close to a solid-fluid interface. The interpretation of the experimental results requires a theoretical model, which is usually the solution of a convection-diffusion equation. Since an analytic solution is not available due to the form of the flow field and the boundary conditions, an alternative numerical approach is presented. It is based on stochastic methods, i.e., a combination of a Brownian Dynamics algorithm and Monte Carlo techniques. Finally, experimental measurements for a hydrophilic surface are analyzed using this new numerical approach.
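A Brownian Dynamics treatment of such a convection-diffusion problem can be sketched as follows (an Euler-Maruyama integrator with an illustrative shear flow; parameters and names are placeholders, not those of the actual TIR-FCCS analysis):

```python
import numpy as np

def brownian_dynamics(x0, velocity, D, dt, steps, rng):
    """Euler-Maruyama integration of overdamped motion in a flow field:
    dx = v(x) dt + sqrt(2 D dt) dW."""
    x = np.array(x0, dtype=float)
    traj = [x.copy()]
    for _ in range(steps):
        x += velocity(x) * dt + np.sqrt(2.0 * D * dt) * rng.normal(size=x.shape)
        traj.append(x.copy())
    return np.array(traj)

# Illustrative shear flow near a wall: v_x proportional to height z.
gamma_dot, D, dt = 1.0, 0.1, 1e-3
rng = np.random.default_rng(1)

def shear(x):
    return np.array([gamma_dot * x[2], 0.0, 0.0])

traj = brownian_dynamics([0.0, 0.0, 1.0], shear, D, dt, 1000, rng)
print(traj.shape)  # → (1001, 3)
```

Correlation functions comparable to the measured ones can then be built by averaging detection statistics over many such trajectories, which is where the Monte Carlo component enters.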

Relevance: 80.00%

Abstract:

A unique characteristic of soft matter is its ability to self-assemble into larger structures. Characterizing these structures is crucial for their applications. In the first part of this work, I investigated DNA-organic hybrid materials by means of Fluorescence Correlation Spectroscopy (FCS) and Fluorescence Cross-Correlation Spectroscopy (FCCS). DNA-organic hybrid materials, a novel class of hybrid materials composed of synthetic macromolecules and oligodeoxynucleotide segments, are mostly amphiphilic and can self-assemble into supramolecular structures in aqueous solution. A hybrid material of a fluorophore, perylenediimide (PDI), and a DNA segment (DNA-PDI) has been developed in Prof. A. Hermann's group (University of Groningen). This novel material has the ability to form aggregates through pi-pi stacking between planar PDIs and can be traced in solution thanks to the fluorescence of PDI. I determined the diffusion coefficient of DNA-PDI conjugates in aqueous solution by means of FCS. In addition, I investigated whether such DNA-PDIs form aggregates with a certain structure, for instance dimers.

Once DNA hybrid materials self-assemble into supramolecular structures, for instance into micelles, the single molecules do not necessarily stay in one specific micelle. In fact, a single molecule may enter and leave micelles constantly. The average residence time of a single molecule in a certain micelle depends on the nature of the molecule. I chose DNA-b-poly(propylene oxide) (PPO) as a model molecule and investigated the residence time of DNA-b-PPO molecules in their corresponding micelles by means of FCCS.

Besides the DNA hybrid materials, polymeric colloids can also form ordered structures once they are brought to an air/water interface. Here, hexagonally close-packed monolayers can be generated. These monolayers can be deposited onto different surfaces as coating layers.

In the second part of this work, I investigated the mechanical properties of such colloidal monolayers using micromechanical cantilevers. When a coating layer is deposited on a cantilever, it modifies the elasticity of the cantilever. This variation is reflected either in a deflection or in a resonance frequency shift of the cantilever. In turn, detecting these changes provides information about the mechanical properties of the coating layer. To this end, polymeric colloidal monolayers were coated on a cantilever, and homogeneous polymer films a few hundred nanometers in thickness were generated from these colloidal monolayers by thermal annealing or organic-vapor annealing. Both the film-formation process and the mechanical properties of the resulting homogeneous films were investigated by means of the cantilever.

Elastic property changes of the coating film, for example upon absorption of organic vapors, induce a deflection of the cantilever. This effect enables a cantilever to detect target molecules when it is coated with an active layer with a specific affinity for the target molecules. In the last part of this thesis, I investigated the applicability of suitably functionalized micromechanical cantilevers as sensors. In particular, glucose-sensitive polymer brushes were grafted on a cantilever and the deflection of this cantilever was measured during exposure to glucose solution.
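The FCS measurements described in the first part rest on the intensity autocorrelation function; a minimal estimator (synthetic intensity trace, not the instrument software) might look like:

```python
import numpy as np

def fcs_autocorrelation(intensity, max_lag):
    """Normalized fluorescence autocorrelation
    G(tau) = <dI(t) dI(t+tau)> / <I>^2 for lags 1..max_lag."""
    I = np.asarray(intensity, dtype=float)
    dI = I - I.mean()
    n = len(I)
    return np.array([
        np.mean(dI[: n - lag] * dI[lag:]) / I.mean() ** 2
        for lag in range(1, max_lag + 1)
    ])

# Synthetic trace: slowly fluctuating signal (correlated over ~50 samples)
# plus uncorrelated shot-like noise.
rng = np.random.default_rng(2)
slow = np.repeat(rng.normal(10.0, 2.0, 200), 50)
trace = slow + rng.normal(0.0, 0.5, slow.size)
G = fcs_autocorrelation(trace, 100)
print(G.shape)  # → (100,)
```

The diffusion coefficient follows from fitting G(τ) with the model for diffusion through the confocal detection volume; FCCS replaces the products dI·dI with products of the two detection channels, so that only species carrying both labels contribute.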

Relevance: 80.00%

Abstract:

Redshift Space Distortions (RSD) are an apparent anisotropy in the distribution of galaxies due to their peculiar motion. These features are imprinted in the correlation function of galaxies, which describes how these structures are distributed around each other. RSD can be represented by a distortion parameter $\beta$, which is closely related to the growth of cosmic structures. For this reason, measurements of RSD can be exploited to place constraints on cosmological parameters, such as the neutrino mass. Neutrinos are neutral subatomic particles that come in three flavours: the electron, the muon and the tau neutrino. Their mass differences can be measured in oscillation experiments. Information on the absolute scale of the neutrino mass can come from cosmology, since neutrinos leave a characteristic imprint on the large-scale structure of the universe. The aim of this thesis is to provide constraints on the accuracy with which the neutrino mass can be estimated when exploiting measurements of RSD. In particular, we describe how the error on the neutrino mass estimate depends on three fundamental parameters of a galaxy redshift survey: the density of the catalogue, the bias of the sample considered and the volume observed. To do this we make use of the BASICC simulation, from which we extract a series of dark matter halo catalogues characterized by different values of bias, density and volume. These mock data are analysed via a Markov Chain Monte Carlo procedure, in order to estimate the neutrino mass fraction, using the software package CosmoMC, which has been suitably modified. In this way we are able to extract a fitting formula describing our measurements, which can be used to forecast the precision reachable in future surveys such as Euclid using this kind of observation.
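The Markov Chain Monte Carlo estimation step can be sketched in a few lines (a toy one-parameter random-walk Metropolis sampler with a Gaussian stand-in posterior; the actual analysis uses CosmoMC and the full RSD likelihood):

```python
import numpy as np

def metropolis(log_post, x0, step, n, rng):
    """Random-walk Metropolis sampler for a 1D posterior."""
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n):
        prop = x + step * rng.normal()
        lp_prop = log_post(prop)
        # Accept with probability min(1, post(prop)/post(x)).
        if np.log(rng.uniform()) < lp_prop - lp:
            x, lp = prop, lp_prop
        chain.append(x)
    return np.array(chain)

# Toy posterior: Gaussian around a "measured" parameter value 0.5
# with uncertainty 0.1 (a stand-in for e.g. a distortion-parameter fit).
log_post = lambda m: -0.5 * ((m - 0.5) / 0.1) ** 2
rng = np.random.default_rng(3)
chain = metropolis(log_post, 0.0, 0.05, 20000, rng)
print(chain[5000:].mean())  # close to 0.5 after burn-in
```

The width of the post-burn-in chain then quantifies the attainable error on the parameter, which is the quantity the fitting formula above expresses as a function of density, bias and volume.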

Relevance: 80.00%

Abstract:

In this work, simulations of liquids at the molecular level were performed using different multiscale techniques. These allow an effective description of the liquid that requires less computer time and can therefore capture phenomena on longer time and length scales.

A key ingredient is a simplified ("coarse-grained") model, obtained from simulations of the detailed model by a systematic procedure. Selected properties of the detailed model (e.g. pair correlation function, pressure) are thereby reproduced.

Algorithms were studied that allow a simultaneous coupling of the detailed and the simplified model ("Adaptive Resolution Scheme", AdResS). Here the detailed model is used in a predefined subvolume of the liquid (e.g. near a surface), while the rest is described by the simplified model.

To this end, a method ("thermodynamic force") was developed to enable the coupling even when the two models are in different thermodynamic states. In addition, a novel coupling algorithm (H-AdResS) was described, which formulates the coupling through a Hamiltonian. In this algorithm, a correction analogous to the thermodynamic force is possible at lower computational cost.

As an application of these basic techniques, path-integral molecular dynamics (MD) simulations of water were studied. This method makes it possible to include quantum effects of the nuclei (delocalization, zero-point energy) in the simulation. First, a multiscale technique ("force matching") was used to extract an effective interaction from a detailed simulation based on density functional theory. The path-integral MD simulation improves the description of the intramolecular structure in comparison with experimental data. The model is also suitable for simultaneous coupling within one simulation, in which a water molecule (described by 48 point particles in the path-integral MD model) is coupled to a simplified model (a single point particle). In this way a water-vacuum interface could be simulated, with only the surface described by the path-integral model and the rest by the simplified model.
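At the heart of AdResS-type schemes is a smooth resolution weight that interpolates between the detailed and the coarse-grained region. A minimal sketch (the cos² ramp is one common choice; the geometry and names here are illustrative, not the thesis implementation):

```python
import numpy as np

def adress_weight(x, d_at, d_hy):
    """AdResS-style resolution weight: 1 inside the atomistic zone
    (|x| < d_at), cos^2 ramp across the hybrid layer of width d_hy,
    0 in the coarse-grained region beyond."""
    r = np.abs(np.asarray(x, dtype=float))
    return np.where(
        r < d_at, 1.0,
        np.where(r > d_at + d_hy, 0.0,
                 np.cos(0.5 * np.pi * (r - d_at) / d_hy) ** 2))

# Weights at the zone center, the zone edge, mid-ramp, and far outside.
print(adress_weight([0.0, 1.0, 1.5, 3.0], d_at=1.0, d_hy=1.0))
```

Pair forces are then typically blended as F_ij = w_i w_j F_ij^AT + (1 − w_i w_j) F_ij^CG, so molecules change resolution smoothly as they cross the hybrid layer.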

Relevance: 80.00%

Abstract:

A LiDAR is a measuring instrument that has seen enormous development in recent decades and is yielding results of great practical utility. We carried out some distance measurements using an instrument built from salvaged material and simple software written by ourselves. In a first, more theoretical, part of the work, the operation of the instrument is illustrated, based on sending laser beams onto opaque targets and receiving their reflection. Particular attention is paid to the methods developed to exploit continuous-wave lasers rather than pulsed ones, which would be more expensive: pseudorandom bit sequences. In the experimental part, the data analysis is presented and the results obtained from the measurements are discussed, with the aim of verifying some hypotheses, devoting particular attention to the comparison of the different sequences. The purpose of this work is to characterize the instrument through the analysis of the measurements and to verify the claim of article [1] in the bibliography, according to which particular bit sequences (A1 and A2) would give better results if used in place of the maximum-length pseudorandom sequence, the M-sequence.
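The pseudorandom-sequence ranging idea can be illustrated compactly: generate a maximum-length sequence with a linear-feedback shift register, delay it, and recover the delay from the correlation peak (a synthetic sketch, not the thesis software):

```python
import numpy as np

def m_sequence(taps, nbits, length):
    """Maximum-length sequence from a Fibonacci LFSR (taps are 1-indexed
    bit positions of the feedback polynomial)."""
    state = [1] * nbits
    out = []
    for _ in range(length):
        out.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]
    return np.array(out)

# 7-bit LFSR with taps (7, 6): period 2**7 - 1 = 127.
seq = 2 * m_sequence((7, 6), 7, 127) - 1          # map bits to ±1 chips
echo = np.roll(seq, 40)                           # target delay of 40 chips
cc = np.array([np.dot(np.roll(seq, k), echo) for k in range(127)])
print(int(np.argmax(cc)))  # → 40
```

The ±1 M-sequence has a two-valued circular autocorrelation (127 at zero lag, −1 elsewhere), which is what makes the delay estimate unambiguous; the A1 and A2 sequences mentioned above are alternative codes whose merits the thesis compares against this baseline.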

Relevance: 80.00%

Abstract:

Over the years the function of social networks has changed many times. Originally social networks were a tool for connecting with friends; now they are websites on which people post information, and when a social network has millions of users it becomes an incredible source of data. Twitter is one of the most visited websites, and is described as "the SMS of the internet", because it is a social network that allows its users to send and read short messages of 140 characters, called "tweets". Over time Twitter has become a fundamental source of news. Its large number of users allows news to spread through the network virally. Many people have tried to analyze the power of tweets, such as their positive or negative content, while others have tried to understand whether they have predictive power. In the financial world in particular, much research has been undertaken to verify the existence of an actual correlation between tweets and stock market fluctuations. The presence of such a relationship, combined with a predictive model, could lead to the development of a model that, by analyzing the tweets in the network relating to a stock, gives information on the future variations of that stock. Our attention turned to the search for, and statistical validation of, this correlation. Tests were carried out on individual stocks, on the basis of the available data, and then extended to the whole dataset to see the general trend and give more weight to the result. This research is characterized by its tweet dataset, which covers a period of more than two years, one of the longest periods ever analyzed.

We sought to give more weight to the results found through the use of statistical validations: the permutation test, to validate the relationship between the tweets about a stock and the corresponding share prices; the removal of a percentage of important events, to show the dependence or independence of the data on the most prominent events of the year; and the Granger causality test, to understand the direction of a prediction between series. Tests with unsuccessful outcomes were also carried out, from which directions for future developments of this research were derived.
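The permutation test used for validation can be sketched as follows (synthetic tweet-volume and return series with a built-in correlation; names and numbers are illustrative, not the thesis data):

```python
import numpy as np

def permutation_pvalue(x, y, n_perm=2000, rng=None):
    """Two-sided permutation test for Pearson correlation: shuffle y to
    build the null distribution of |r| and count exceedances."""
    if rng is None:
        rng = np.random.default_rng(0)
    r_obs = abs(np.corrcoef(x, y)[0, 1])
    count = 0
    for _ in range(n_perm):
        r = abs(np.corrcoef(x, rng.permutation(y))[0, 1])
        if r >= r_obs:
            count += 1
    return (count + 1) / (n_perm + 1)

rng = np.random.default_rng(4)
tweets = rng.poisson(100, 250).astype(float)      # synthetic daily tweet volume
returns = 0.02 * (tweets - tweets.mean()) + rng.normal(0, 0.5, 250)
p = permutation_pvalue(tweets, returns, rng=rng)
print(p < 0.05)  # the built-in correlation survives the test → True
```

Shuffling destroys any temporal pairing between the two series while preserving their marginal distributions, so the fraction of shuffles with a correlation at least as strong as the observed one is a direct p-value.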

Relevance: 80.00%

Abstract:

Ocean acidification, a consequence of climate change, is a process that is still poorly understood. To understand this phenomenon, naturally acidified environments, regarded as open-air laboratories, can be used. The aim of this thesis work was to use the fumaroles on the island of Ischia to investigate the dynamics of acidification processes and to analyze the possible interaction between pH and meteorological conditions. The data used, provided by the Stazione Zoologica "Anton Dohrn" of Naples, were pH and wind time series recorded continuously in two areas, north and south of the islet of the Castello Aragonese, and at three stations along an acidification gradient. The work was carried out in steps, in which the result of one analysis suggested the type of analysis and the analytical method to be used next. Initially the two series were analyzed individually to obtain their most salient parameters. The data were then correlated with each other to estimate the influence of wind on pH. Overall it was possible to show that the acidification phenomenon is correlated with the wind, but the response appears to be site-specific, as it turned out to depend on other factors interacting at the local scale, such as the geomorphology of the territory, the marine currents and the bathymetry of the sea floor. However, it also proved difficult to find clear correlations between the two series investigated, because they are very complex, owing both to the large number of zeros in the wind series and to the strong natural variability of pH at the various stations examined.

In general, this work demonstrated how to use time-series analysis techniques, and how regression, autocorrelation, cross-correlation and smoothing methods can complement models that take into account variables exogenous to the variable of interest.
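A lagged cross-correlation analysis between a wind and a pH series can be sketched as follows (synthetic series with a built-in three-step lagged response; not the Ischia data):

```python
import numpy as np

def lagged_crosscorr(x, y, max_lag):
    """Pearson cross-correlation of y against x for lags 0..max_lag
    (positive lag: x leads y)."""
    out = []
    for lag in range(max_lag + 1):
        a = x[: len(x) - lag]
        b = y[lag:]
        out.append(np.corrcoef(a, b)[0, 1])
    return np.array(out)

rng = np.random.default_rng(5)
wind = rng.gamma(2.0, 2.0, 500)                   # synthetic wind-speed series
# pH drops in response to wind three steps earlier, plus measurement noise.
ph = 8.1 - 0.05 * np.roll(wind, 3) + rng.normal(0, 0.02, 500)
cc = lagged_crosscorr(wind, ph, 10)
print(int(np.argmin(cc)))  # strongest (negative) response at lag → 3
```

The lag of the extremum identifies how long the pH takes to respond to the wind forcing; a site-specific response of the kind described above would show up as different lags and amplitudes at different stations.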

Relevance: 80.00%

Abstract:

An algorithm for the real-time registration of a retinal video sequence, captured with a scanning digital ophthalmoscope (SDO), to a retinal composite image is presented. This method is designed for a computer-assisted retinal laser photocoagulation system to compensate for retinal motion and hence enhance the accuracy, speed, and patient safety of retinal laser treatments. The procedure combines intensity- and feature-based registration techniques. For the registration of an individual frame, the translational frame-to-frame motion between the preceding and the current frame is detected by normalized cross-correlation. Next, vessel points on the current video frame are identified, and an initial transformation estimate is constructed from the calculated translation vector and the quadratic registration matrix of the previous frame. The vessel points are then iteratively matched to the segmented vessel centerline of the composite image to refine the initial transformation and register the video frame to the composite image. Criteria for image quality and algorithm convergence are introduced, which govern the exclusion of single frames from the registration process and signal a loss of tracking if necessary. The algorithm was successfully applied to ten different video sequences recorded from patients. It revealed an average accuracy of 2.47 ± 2.0 pixels (∼23.2 ± 18.8 μm) for 2764 evaluated video frames and demonstrated that it meets the clinical requirements.

Relevance: 80.00%

Abstract:

Currently, a variety of linear and nonlinear measures is in use to investigate spatiotemporal interrelation patterns of multivariate time series. Whereas the former are by definition insensitive to nonlinear effects, the latter detect both nonlinear and linear interrelation. In the present contribution we employ a uniform surrogate-based approach, which is capable of disentangling interrelations that significantly exceed random effects and interrelations that significantly exceed linear correlation. The bivariate version of the proposed framework is explored using a simple model allowing for separate tuning of coupling and nonlinearity of interrelation. To demonstrate applicability of the approach to multivariate real-world time series we investigate resting state functional magnetic resonance imaging (rsfMRI) data of two healthy subjects as well as intracranial electroencephalograms (iEEG) of two epilepsy patients with focal onset seizures. The main findings are that for our rsfMRI data interrelations can be described by linear cross-correlation. Rejection of the null hypothesis of linear iEEG interrelation occurs predominantly for epileptogenic tissue as well as during epileptic seizures.
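Surrogate-based approaches of this kind rely on constructing time series that preserve a signal's linear properties while destroying everything else. A minimal Fourier (phase-randomization) surrogate, as one standard ingredient of such frameworks (illustrative code, not the authors' implementation):

```python
import numpy as np

def phase_surrogate(x, rng):
    """Fourier phase-randomized surrogate: same power spectrum (hence the
    same linear autocorrelation), randomized phases destroy any nonlinear
    structure."""
    X = np.fft.rfft(x)
    phases = rng.uniform(0, 2 * np.pi, len(X))
    phases[0] = 0.0                     # keep the mean component real
    if len(x) % 2 == 0:
        phases[-1] = 0.0                # keep the Nyquist bin real
    return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=len(x))

rng = np.random.default_rng(6)
x = np.cumsum(rng.normal(size=1024))    # strongly autocorrelated test signal
s = phase_surrogate(x, rng)
# The power spectra agree, so the linear correlation structure is preserved.
print(np.allclose(np.abs(np.fft.rfft(s)), np.abs(np.fft.rfft(x))))  # → True
```

An interrelation measure evaluated on an original pair of signals that significantly exceeds its distribution over such surrogate pairs then indicates structure beyond linear cross-correlation, which is the disentangling described above.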

Relevance: 80.00%

Abstract:

We studied the emotional responses of musicians to familiar classical music excerpts, both when the music was sounded and when it was imagined. We used continuous response methodology to record response profiles for the dimensions of valence and arousal simultaneously, and then on the single dimension of emotionality. The response profiles were compared using cross-correlation analysis and an analysis of responses to musical-feature turning points, which isolate instances of change in musical features thought to influence valence and arousal responses. We found strong similarity in the use of the emotionality and arousal scales across the stimuli, regardless of condition (imagined or sounded). A majority of participants were able to create emotional response profiles while imagining the music, which were similar in timing to the response profiles created while listening to the sounded music. We conclude that similar mechanisms may be involved in the processing of emotion in music when the music is sounded and when it is imagined.

Relevance: 80.00%

Abstract:

Two-particle correlations in relative azimuthal angle (Δφ) and pseudorapidity (Δη) are measured in √s_NN = 5.02 TeV p+Pb collisions using the ATLAS detector at the LHC. The measurements are performed using approximately 1 μb⁻¹ of data as a function of transverse momentum (p_T) and the transverse energy (ΣE_T^Pb) summed over 3.1 < η < 4.9 in the direction of the Pb beam. The correlation function, constructed from charged particles, exhibits a long-range (2 < |Δη| < 5) "near-side" (Δφ ~ 0) correlation that grows rapidly with increasing ΣE_T^Pb. A long-range "away-side" (Δφ ~ π) correlation, obtained by subtracting the expected contributions from recoiling dijets and other sources estimated using events with small ΣE_T^Pb, is found to match the near-side correlation in magnitude, shape (in Δη and Δφ) and ΣE_T^Pb dependence. The resultant Δφ correlation is approximately symmetric about π/2, and is consistent with a dominant cos 2Δφ modulation for all ΣE_T^Pb ranges and particle p_T.