904 results for Cross-correlation function


Relevance: 90.00%

Publisher:

Abstract:

The sea surface temperature (SST) and chlorophyll-a concentration (CHL-a) were analysed in the Gulf of Tadjourah from two sets of 8-day composite satellite data, covering 2008-2012 and 2005-2011 respectively. A singular spectrum analysis (SSA) shows that the annual cycle of SST is strong (74.3% of variance) and consists of warming (April-October) and cooling (November-March) of about 2.5 °C relative to the long-term average. The semi-annual cycle captures only 14.6% of the temperature variance and emphasises the drop in SST during July-August. Similarly, the annual cycle of CHL-a (29.7% of variance) depicts high CHL-a from June to October and low concentrations from November to May. In addition, the first spatial empirical orthogonal function (EOF) of SST (93% of variance) shows that the seasonal warming/cooling is in phase across the whole study area, with the southeastern part always remaining warmer or cooler. In contrast to the SST, the first EOF of CHL-a (54.1% of variance) indicates that the continental shelf is in phase opposition with the offshore area in winter, during which CHL-a remains sequestered in the coastal area, particularly in the south-east and in the Ghoubet Al-Kharab Bay. Conversely, during summer, higher CHL-a concentrations appear in the offshore waters. To investigate the processes generating these patterns, a multichannel spectrum analysis was applied to a set of oceanic (SST, CHL-a) and atmospheric parameters (wind speed, air temperature and air specific humidity). This analysis shows that the SST is well correlated with the atmospheric parameters at the annual scale. The windowed cross-correlation indicates that this correlation is significant only from October to May. During this period, the warming was related to solar heating of the surface water when the wind is weak (April-May and October), while the cooling (November-March) was linked to the strong, cold north-east winds and to convective mixing.
The summer drop in SST, followed by a peak of CHL-a, seems strongly correlated with upwelling. The second EOF modes of SST and CHL-a explain 1.3% and 5% of the variance respectively, and show an east-west gradient during winter that is reversed during summer. This work showed that the seasonal signals have a wide spatial influence and dominate the variability of the SST and CHL-a, while the east-west gradients are specific to the Gulf of Tadjourah and seem induced by the local wind modulated by the topography.
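The EOF decomposition described in this abstract can be sketched numerically. The following is a minimal illustration (not the authors' code) of how leading EOF modes and their explained-variance fractions are typically obtained from a time-by-space anomaly matrix via singular value decomposition; the synthetic series stands in for the satellite composites.

```python
import numpy as np

def leading_eofs(field, n_modes=2):
    """Leading EOFs of a (time x space) data matrix: spatial patterns,
    principal-component time series and explained-variance fractions."""
    anomalies = field - field.mean(axis=0)          # remove the time mean
    u, s, vt = np.linalg.svd(anomalies, full_matrices=False)
    variance = s**2 / np.sum(s**2)                  # fraction of variance per mode
    pcs = u[:, :n_modes] * s[:n_modes]              # PC time series
    patterns = vt[:n_modes]                         # spatial EOF patterns
    return patterns, pcs, variance[:n_modes]

# synthetic stand-in: one dominant annual cycle over 46 composites x 100 grid points
t = np.arange(46)
amplitude = np.linspace(0.5, 1.5, 100)              # spatial weighting
sst = np.outer(np.sin(2 * np.pi * t / 46), amplitude)
patterns, pcs, var = leading_eofs(sst)
print(float(var[0]))                                # first mode dominates
```

Because the toy field contains a single coherent cycle, the first mode captures essentially all of the variance, mirroring the dominance of the annual SST mode reported above.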

Relevance: 90.00%

Publisher:

Abstract:

Following the methodology of Ferreira and Dionísio (2016), the objective of this paper is to analyze the behavior of stock markets in the G7 countries and to find which of those countries is the first to reach levels of long-range correlation that are no longer significant. We carry out this analysis using detrended cross-correlation analysis (DCCA) and its correlation coefficient to check for the existence of long-range dependence in the time series. The existence of long-range dependence could be understood as a possible violation of the efficient market hypothesis (EMH). This analysis remains interesting because existing studies are not conclusive about the existence of long memory in stock return rates.
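The detrended cross-correlation coefficient used here can be sketched as follows. This is a generic implementation of the rho_DCCA idea (detrended covariance over detrended variances); the box size and the synthetic coupled series are illustrative assumptions, not the paper's data.

```python
import numpy as np

def dcca_coefficient(x, y, box_size):
    """Detrended cross-correlation coefficient rho_DCCA(n): integrate
    both series, linearly detrend them inside overlapping boxes of
    length `box_size`, then take the ratio of the mean detrended
    covariance to the product of the mean detrended fluctuations."""
    x = np.cumsum(x - np.mean(x))       # integrated profile of x
    y = np.cumsum(y - np.mean(y))
    n = box_size
    t = np.arange(n + 1)
    covs, varx, vary = [], [], []
    for start in range(len(x) - n):
        xs, ys = x[start:start + n + 1], y[start:start + n + 1]
        rx = xs - np.polyval(np.polyfit(t, xs, 1), t)   # local detrending
        ry = ys - np.polyval(np.polyfit(t, ys, 1), t)
        covs.append(np.mean(rx * ry))
        varx.append(np.mean(rx * rx))
        vary.append(np.mean(ry * ry))
    return np.mean(covs) / np.sqrt(np.mean(varx) * np.mean(vary))

# two series sharing a common component correlate strongly at any scale
rng = np.random.default_rng(0)
common = rng.standard_normal(1000)
a = common + 0.1 * rng.standard_normal(1000)
b = common + 0.1 * rng.standard_normal(1000)
rho = dcca_coefficient(a, b, 10)
print(rho)     # close to 1 for strongly coupled series
```

In the paper's setting one would compute rho_DCCA between lagged return series over a range of box sizes and test where it ceases to be significant.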

Relevance: 80.00%

Publisher:

Abstract:

Introduction. This is a pilot study of quantitative electroencephalographic (QEEG) comodulation analysis, used to help identify regional brain differences in people suffering from chronic fatigue syndrome (CFS) compared to a normative database. QEEG comodulation analysis examines the spatio-temporal cross-correlation of spectral estimates in the resting dominant frequency band. A pattern shown by Sterman and Kaiser (2001), referred to as the anterior posterior dissociation (APD), discloses a significant reduction in shared functional modulation between frontal and centro-parietal areas of the cortex. This research examines whether this pattern is evident in CFS. Method. Eleven adult participants, diagnosed by a physician as having CFS, took part in QEEG data collection. Nineteen-channel cap recordings were made in five conditions: an eyes-closed baseline, eyes open, a reading task, a math computation task, and a second eyes-closed baseline. Results. Four of the 11 participants showed an anterior posterior dissociation pattern for the eyes-closed resting dominant frequency; the other seven did not. Examination of the mean 8-12 Hz amplitudes across three cortical regions (frontal, central and parietal) indicated a trend of higher overall alpha levels in the parietal region in CFS patients who showed the APD pattern compared to those who did not. All patients showing the pattern were free of medication, while 71% of those without the pattern were using antidepressant medications. Conclusions. Although the sample is small, this method of evaluating the disorder holds promise. The fact that the pattern was not consistently present in the CFS sample could be explained by subtypes of CFS, or perhaps by comorbid conditions.
Further, the use of antidepressant medications may mask the pattern by altering the temporal characteristics of the EEG. The results of this pilot study indicate that further research is warranted to verify that the pattern holds across the wider population of CFS sufferers.
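Comodulation analysis of the kind described can be sketched generically: windowed band amplitudes are extracted per channel, and their time courses are then correlated across channels. The band, window length, sampling rate and synthetic two-channel signals below are illustrative assumptions, not the study's recording setup.

```python
import numpy as np

def comodulation(ch_a, ch_b, fs=256, win=256, band=(8, 12)):
    """Correlate the time course of spectral amplitude in `band`
    between two channels: per-window FFT amplitudes in the band are
    averaged, then Pearson-correlated across windows."""
    freqs = np.fft.rfftfreq(win, 1 / fs)
    sel = (freqs >= band[0]) & (freqs <= band[1])

    def band_amps(x):
        n = len(x) // win
        segs = x[:n * win].reshape(n, win)            # non-overlapping windows
        return np.abs(np.fft.rfft(segs, axis=1))[:, sel].mean(axis=1)

    return np.corrcoef(band_amps(ch_a), band_amps(ch_b))[0, 1]

# two synthetic channels sharing the same slowly modulated 10 Hz alpha rhythm
fs = 256
t = np.arange(fs * 20) / fs
envelope = 1 + 0.5 * np.sin(2 * np.pi * 0.1 * t)      # shared slow modulation
rng = np.random.default_rng(5)
front = envelope * np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(len(t))
back = envelope * np.sin(2 * np.pi * 10 * t + 1.0) + 0.1 * rng.standard_normal(len(t))
r = comodulation(front, back)
print(r)    # high: the two channels share their alpha modulation
```

A reduced value of this correlation between frontal and centro-parietal channels is, in essence, what the APD pattern describes.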

Relevance: 80.00%

Publisher:

Abstract:

In this paper, we present a microphone-array beamforming approach to blind speech separation. Unlike previous beamforming approaches, our system does not require a priori knowledge of the microphone placement or speaker locations, making it directly comparable to other blind source separation methods that require no prior knowledge of the recording conditions. Microphone locations are estimated automatically using an assumed noise-field model, and speaker locations are estimated using cross-correlation based methods. The system is evaluated on the data provided for the PASCAL Speech Separation Challenge 2 (SSC2), achieving a word error rate of 58% on the evaluation set.
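The cross-correlation based localisation step can be illustrated with a generic time-delay estimator for a microphone pair. The sketch below uses GCC-PHAT, a common weighting for this task; it is an assumed, generic formulation, not necessarily the one used in the paper.

```python
import numpy as np

def gcc_phat_delay(sig, ref):
    """Estimate the delay (in samples) of `sig` relative to `ref`
    using the phase-transform-weighted generalised cross correlation
    (GCC-PHAT), a standard microphone-pair TDOA estimator."""
    n = len(sig) + len(ref)
    spec = np.fft.rfft(sig, n) * np.conj(np.fft.rfft(ref, n))
    spec /= np.maximum(np.abs(spec), 1e-12)        # PHAT: keep phase only
    cc = np.fft.irfft(spec, n)
    # reorder so negative lags precede positive lags
    cc = np.concatenate((cc[-(len(ref) - 1):], cc[:len(sig)]))
    return int(np.argmax(cc)) - (len(ref) - 1)

# synthetic source reaching the second microphone 7 samples later
rng = np.random.default_rng(1)
src = rng.standard_normal(4000)
mic1 = src
mic2 = np.concatenate((np.zeros(7), src))[:4000]
print(gcc_phat_delay(mic2, mic1))   # -> 7
```

From such pairwise delays and estimated microphone positions, a speaker location can then be triangulated for beamformer steering.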

Relevance: 80.00%

Publisher:

Abstract:

Background: The transmission of hemorrhagic fever with renal syndrome (HFRS) is influenced by climatic variables. However, few studies have examined the quantitative relationship between climate variation and HFRS transmission.

Objective: We examined the potential impact of climate variability on HFRS transmission and developed climate-based forecasting models for HFRS in northeastern China.

Methods: We obtained data on monthly counts of reported HFRS cases in Elunchun and Molidawahaner counties for 1997-2007 from the Inner Mongolia Center for Disease Control and Prevention, and climate data from the Chinese Bureau of Meteorology. Cross-correlations assessed crude associations between monthly HFRS cases and climate variables, including rainfall, land surface temperature (LST), relative humidity (RH), and the multivariate El Niño Southern Oscillation (ENSO) index (MEI), over a range of lags. We used time-series Poisson regression models to examine the independent contribution of climatic variables to HFRS transmission.

Results: Cross-correlation analyses showed that rainfall, LST, RH, and MEI were significantly associated with monthly HFRS cases at lags of 3-5 months in both study areas. The Poisson regression results indicated that, after controlling for autocorrelation, seasonality, and long-term trend, rainfall, LST, RH, and MEI at lags of 3-5 months were associated with HFRS in both study areas. The final model had good accuracy in forecasting the occurrence of HFRS.

Conclusions: Climate variability plays a significant role in HFRS transmission in northeastern China. The model developed in this study has implications for HFRS control and prevention.
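The lagged cross-correlation screening described in the Methods can be sketched as follows; the rainfall and case series below are synthetic stand-ins (with a built-in 4-month lag) used purely for illustration, not the study's data.

```python
import numpy as np

def lagged_cross_correlation(climate, cases, max_lag=6):
    """Pearson correlation between a climate series and monthly case
    counts, with the climate series leading the cases by 0..max_lag months."""
    corr = {}
    for lag in range(max_lag + 1):
        x = climate[:len(climate) - lag]    # climate values `lag` months earlier
        y = cases[lag:]
        corr[lag] = np.corrcoef(x, y)[0, 1]
    return corr

# synthetic 11-year monthly series: cases follow rainfall with a 4-month delay
rng = np.random.default_rng(2)
rain = rng.standard_normal(132)
cases = np.concatenate((np.zeros(4), rain[:-4])) + 0.1 * rng.standard_normal(132)
corr = lagged_cross_correlation(rain, cases)
best = max(corr, key=corr.get)
print(best)    # -> 4: the built-in lag is recovered
```

Lags identified this way would then enter a time-series Poisson regression as candidate predictors, as the abstract describes.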

Relevance: 80.00%

Publisher:

Abstract:

Stereo vision is a method of depth perception, in which depth information is inferred from two (or more) images of a scene, taken from different perspectives. Applications of stereo vision include aerial photogrammetry, autonomous vehicle guidance, robotics, industrial automation and stereomicroscopy. A key issue in stereo vision is that of image matching, or identifying corresponding points in a stereo pair. The difference in the positions of corresponding points in image coordinates is termed the parallax or disparity. When the orientation of the two cameras is known, corresponding points may be projected back to find the location of the original object point in world coordinates. Matching techniques are typically categorised according to the nature of the matching primitives they use and the matching strategy they employ. This report provides a detailed taxonomy of image matching techniques, including area based, transform based, feature based, phase based, hybrid, relaxation based, dynamic programming and object space methods. A number of area based matching metrics as well as the rank and census transforms were implemented, in order to investigate their suitability for a real-time stereo sensor for mining automation applications. The requirements of this sensor were speed, robustness, and the ability to produce a dense depth map. The Sum of Absolute Differences matching metric was the least computationally expensive; however, this metric was the most sensitive to radiometric distortion. Metrics such as the Zero Mean Sum of Absolute Differences and Normalised Cross Correlation were the most robust to this type of distortion but introduced additional computational complexity. The rank and census transforms were found to be robust to radiometric distortion, in addition to having low computational complexity. They are therefore prime candidates for a matching algorithm for a stereo sensor for real-time mining applications. 
A number of issues came to light during this investigation which may merit further work. These include devising a means to evaluate and compare disparity results of different matching algorithms, and finding a method of assigning a level of confidence to a match. Another issue of interest is the possibility of statistically combining the results of different matching algorithms, in order to improve robustness.
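The area-based metrics compared in this report can be written compactly. The minimal sketch below (single-window only, not a full stereo pipeline) shows why NCC is invariant to gain and offset distortion while SAD is not; the synthetic windows are illustrative.

```python
import numpy as np

def sad(a, b):
    """Sum of Absolute Differences: cheapest metric, but the most
    sensitive to radiometric (gain/offset) distortion."""
    return np.sum(np.abs(a - b))

def zsad(a, b):
    """Zero-mean SAD: subtracting the window means removes offset
    distortion at a small extra cost."""
    return np.sum(np.abs((a - a.mean()) - (b - b.mean())))

def ncc(a, b):
    """Normalised cross correlation: invariant to both gain and offset."""
    az, bz = a - a.mean(), b - b.mean()
    return np.sum(az * bz) / np.sqrt(np.sum(az * az) * np.sum(bz * bz))

rng = np.random.default_rng(3)
left = rng.uniform(0, 255, (8, 8))      # reference window
right = 1.5 * left + 20                 # same patch under gain and offset
print(ncc(left, right))                 # ~1.0 despite the distortion
```

SAD reports a large cost for this radiometrically distorted (but geometrically perfect) match, ZSAD removes only the offset component, and NCC still scores it as a perfect match, which is the trade-off discussed above.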

Relevance: 80.00%

Publisher:

Abstract:

Single particle analysis (SPA) coupled with high-resolution electron cryo-microscopy is emerging as a powerful technique for the structure determination of membrane protein complexes and soluble macromolecular assemblies. Current estimates suggest that ~10^4-10^5 particle projections are required to attain a 3 Å resolution 3D reconstruction (symmetry dependent). Selecting this number of molecular projections differing in size, shape and symmetry is a rate-limiting step for the automation of 3D image reconstruction. Here, we present SwarmPS, a feature-rich GUI-based software package to manage large-scale, semi-automated particle picking projects. The software provides cross-correlation and edge-detection algorithms. Algorithm-specific parameters are transparently and automatically determined through user interaction with the image, rather than by trial and error. Other features include multiple image handling (~10^2 images), local and global particle selection options, interactive image freezing, automatic particle centering, and full manual override to correct false positives and negatives. SwarmPS is user friendly, flexible, extensible, fast, and capable of exporting boxed-out projection images, or particle coordinates, compatible with downstream image processing suites.
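The cross-correlation picking step can be illustrated with a naive normalised cross-correlation template match over an image; this is a generic sketch, not SwarmPS code, and the Gaussian "particle" and noise micrograph are synthetic.

```python
import numpy as np

def match_template(image, template):
    """Normalised cross-correlation map of a template over an image
    (valid positions only); peaks mark candidate particle positions."""
    th, tw = template.shape
    t = template - template.mean()
    tn = np.sqrt(np.sum(t * t))
    h, w = image.shape
    out = np.zeros((h - th + 1, w - tw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            win = image[i:i + th, j:j + tw]
            wz = win - win.mean()
            denom = tn * np.sqrt(np.sum(wz * wz))
            out[i, j] = np.sum(wz * t) / denom if denom > 0 else 0.0
    return out

# toy micrograph: one blob-like "particle" pasted into background noise
rng = np.random.default_rng(4)
img = 0.05 * rng.standard_normal((32, 32))
yy, xx = np.mgrid[-3:4, -3:4]
blob = np.exp(-(xx**2 + yy**2) / 4.0)        # 7x7 Gaussian template
img[10:17, 20:27] += blob
score = match_template(img, blob)
peak = np.unravel_index(np.argmax(score), score.shape)
print(peak)    # -> (10, 20): top-left corner of the planted particle
```

Production pickers additionally threshold the correlation map, suppress overlapping peaks and (as in SwarmPS) allow manual correction of false positives and negatives.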

Relevance: 80.00%

Publisher:

Abstract:

Texture analysis and textural cues have been applied to image classification, segmentation and pattern recognition. Dominant texture descriptors include directionality, coarseness, line-likeness, etc. In this dissertation a class of textures known as particulate textures is defined, which are predominantly coarse or blob-like. The set of features that characterise particulate textures differs from those that characterise classical textures; these features are micro-texture, macro-texture, size, shape and compaction. Classical texture analysis techniques do not adequately capture particulate texture features. This gap is identified and new methods for analysing particulate textures are proposed. The levels of complexity in particulate textures are also presented, ranging from the simplest images, where blob-like particles are easily isolated from their background, to more complex images, where the particles and the background are not easily separable or the particles are occluded. Simple particulate images can be analysed for particle shapes and sizes. Complex particulate texture images, on the other hand, often permit only the estimation of particle dimensions. Real-life applications of particulate textures are reviewed, including applications to sedimentology, granulometry and road surface texture analysis. A new framework for the computation of particulate shape is proposed. A granulometric approach for particle size estimation based on edge detection is developed, which can be adapted to the gray level of the images by varying its parameters. This study binds visual texture analysis and road surface macrotexture in a theoretical framework, thus making it possible to apply monocular imaging techniques to road surface texture analysis.
Results from the application of the developed algorithm to road surface macrotexture are compared with results based on Fourier spectra, the autocorrelation function and wavelet decomposition, indicating the superior performance of the proposed technique. The influence of image acquisition conditions such as illumination and camera angle on the results was systematically analysed. Experimental data were collected from over 5 km of road in Brisbane, and the estimated coarseness along the road was compared with laser profilometer measurements. A coefficient of determination R^2 exceeding 0.9 was obtained when correlating the proposed imaging technique with the state-of-the-art Sensor Measured Texture Depth (SMTD) obtained using laser profilometers.

Relevance: 80.00%

Publisher:

Abstract:

The quadrupole coupling constants (qcc) for 39K and 23Na ions in glycerol have been calculated from linewidths measured as a function of temperature (which in turn changes the solution viscosity). The qcc of 39K in glycerol is found to be 1.7 MHz, and that of 23Na is 1.6 MHz. The relaxation behavior of 39K and 23Na ions in glycerol shows magnetic field and temperature dependence consistent with the equations for transverse relaxation more commonly used to describe the reorientation of nuclei in a molecular framework with intramolecular field gradients. It is shown, however, that τc is not simply proportional to the ratio of viscosity to temperature (η/T). The 39K qcc in glycerol and the value of 1.3 MHz estimated for this nucleus in aqueous solution are much greater than the values of 0.075 to 0.12 MHz calculated from T2 measurements of 39K in freshly excised rat tissues. This indicates that, in biological samples, processes such as exchange of potassium between intracellular compartments or diffusion of ions through locally ordered regions play a significant role in determining the effective quadrupole coupling constant and correlation time governing 39K relaxation. T1 and T2 measurements of rat muscle at two magnetic fields also indicate that a more complex correlation function may be required to describe the relaxation of 39K in tissue. Similar results and conclusions are found for 23Na.

Relevance: 80.00%

Publisher:

Abstract:

Structural health monitoring (SHM) refers to the procedure used to assess the condition of structures so that their performance can be monitored and any damage can be detected early. Early detection of damage and appropriate retrofitting will aid in preventing failure of the structure, save money spent on maintenance or replacement, and ensure the structure operates safely and efficiently during its whole intended life. Though visual inspection and other techniques such as vibration-based methods are available for SHM of structures such as bridges, the use of the acoustic emission (AE) technique is an attractive option and is increasing in use. AE waves are high-frequency stress waves generated by the rapid release of energy from localised sources within a material, such as crack initiation and growth. The AE technique involves recording these waves by means of sensors attached to the surface and then analysing the signals to extract information about the nature of the source. High sensitivity to crack growth, the ability to locate the source, its passive nature (no need to supply energy from outside, as energy from the damage source itself is utilised) and the possibility of real-time monitoring (detecting a crack as it occurs or grows) are some of the attractive features of the AE technique. In spite of these advantages, challenges remain in using the AE technique for monitoring applications, especially in the analysis of recorded AE data, as large volumes of data are usually generated during monitoring. The need for effective data analysis can be linked with three main aims of monitoring: (a) accurately locating the source of damage; (b) identifying and discriminating signals from different sources of acoustic emission; and (c) quantifying the level of damage of an AE source for severity assessment. In the AE technique, the location of the emission source is usually calculated using the arrival times and velocities of the AE signals recorded by a number of sensors.
However, complications arise because AE waves can travel through a structure in a number of different modes with different velocities and frequencies. Hence, to accurately locate a source it is necessary to identify the modes recorded by the sensors. This study has proposed and tested the use of time-frequency analysis tools such as the short-time Fourier transform to identify the modes, and the use of the velocities of these modes to achieve very accurate results. Further, this study has explored the possibility of reducing the number of sensors needed for data capture by using the velocities of modes captured by a single sensor for source localization. A major problem in the practical use of the AE technique is the presence of AE sources other than crack-related ones, such as rubbing and impacts between different components of a structure. These spurious AE signals often mask the signals from crack activity; hence, discriminating signals to identify their sources is very important. This work developed a model that uses different signal processing tools, such as cross-correlation, magnitude-squared coherence and energy distribution in different frequency bands, as well as modal analysis (comparing amplitudes of identified modes), for accurately differentiating signals from different simulated AE sources. Quantification tools to assess the severity of damage sources are highly desirable in practical applications. Though different damage quantification methods have been proposed for the AE technique, not all have achieved universal approval or have been shown suitable for all situations. The b-value analysis, which involves the study of the distribution of amplitudes of AE signals, and its modified form (known as improved b-value analysis) were investigated for suitability for damage quantification in ductile materials such as steel.
This approach was found to give encouraging results for the analysis of laboratory data, thereby extending the possibility of its use for real-life structures. By addressing these primary issues, it is believed that this thesis has helped improve the effectiveness of the AE technique for structural health monitoring of civil infrastructure such as bridges.
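The arrival-time source localisation described above reduces, in the simplest one-dimensional case, to solving for position from the arrival-time difference between two sensors at a single assumed wave velocity; the velocity and geometry below are illustrative values, not the thesis's experimental setup.

```python
def locate_source_1d(t1, t2, x1, x2, v):
    """Locate an acoustic emission source on the line between two
    sensors at x1 and x2 from the arrival-time difference t1 - t2,
    assuming a single propagation velocity v. From
    t1 = t0 + (x - x1)/v and t2 = t0 + (x2 - x)/v it follows that
    x = (x1 + x2 + v * (t1 - t2)) / 2."""
    return 0.5 * (x1 + x2 + v * (t1 - t2))

# source at 0.5 m on a 2 m sensor spacing, assumed wave speed 5000 m/s
print(locate_source_1d(1e-4, 3e-4, 0.0, 2.0, 5000.0))   # -> 0.5
```

The complication the study addresses is that v is mode-dependent, so the correct mode (and hence velocity) must be identified, e.g. via a short-time Fourier transform, before this solve is meaningful.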

Relevance: 80.00%

Publisher:

Abstract:

Traditional area-based matching techniques make use of similarity metrics such as the Sum of Absolute Differences (SAD), Sum of Squared Differences (SSD) and Normalised Cross Correlation (NCC). Non-parametric matching algorithms such as the rank and census transforms rely on the relative ordering of pixel values, rather than the pixel values themselves, as a similarity measure. Both traditional area-based and non-parametric stereo matching techniques have an algorithmic structure which is amenable to fast hardware realisation. This investigation undertakes a performance assessment of these two families of algorithms for robustness to radiometric distortion and random noise. A generic implementation framework is presented for the stereo matching problem, and the relative hardware requirements of the various metrics are investigated.
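The census transform mentioned above can be sketched in a few lines. The example demonstrates its key property, invariance to monotonic radiometric distortion; the 3x3 window and the synthetic image pair are illustrative choices.

```python
import numpy as np

def census_transform(img):
    """3x3 census transform: each interior pixel becomes an 8-bit code
    recording which neighbours are darker than the centre pixel."""
    h, w = img.shape
    centre = img[1:h - 1, 1:w - 1]
    out = np.zeros((h - 2, w - 2), dtype=np.uint16)
    for dy in range(3):
        for dx in range(3):
            if dy == 1 and dx == 1:
                continue                      # skip the centre itself
            nb = img[dy:dy + h - 2, dx:dx + w - 2]
            out = (out << 1) | (nb < centre).astype(np.uint16)
    return out

def hamming(a, b):
    """Matching cost: number of differing bits between two census codes."""
    return bin(int(a) ^ int(b)).count("1")

rng = np.random.default_rng(6)
left = rng.uniform(0, 255, (5, 5))
right = 2.0 * left + 10                       # monotonic radiometric distortion
cl, cr = census_transform(left), census_transform(right)
print(hamming(cl[1, 1], cr[1, 1]))            # -> 0: relative ordering preserved
```

Because the codes depend only on pixel orderings, gain and offset changes leave them untouched, and the Hamming-distance cost needs only XOR gates and a popcount, which is why these transforms map so well to hardware.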

Relevance: 80.00%

Publisher:

Abstract:

A frame-rate stereo vision system, based on non-parametric matching metrics, is described. Traditional metrics, such as normalized cross-correlation, are expensive in terms of logic. Non-parametric measures require only simple, parallelizable, functions such as comparators, counters and exclusive-or, and are thus very well suited to implementation in reprogrammable logic.

Relevance: 80.00%

Publisher:

Abstract:

Discretization of a geographical region is quite common in spatial analysis. There have been few studies into the impact of different geographical scales on the outcome of spatial models for different spatial patterns. This study aims to investigate the impact of spatial scales and spatial smoothing on the outcomes of modelling spatial point-based data. Given a spatial point-based dataset (such as occurrences of a disease), we study the geographical variation of residual disease risk using regular grid cells. The individual disease risk is modelled using a logistic model with the inclusion of spatially unstructured and/or spatially structured random effects. Three spatial smoothness priors for the spatially structured component are employed in modelling, namely an intrinsic Gaussian Markov random field, a second-order random walk on a lattice, and a Gaussian field with a Matérn correlation function. We investigate how changes in grid cell size affect model outcomes under different spatial structures and different smoothness priors for the spatial component. A realistic example (the Humberside data) is analysed and a simulation study is described. Bayesian computation is carried out using an integrated nested Laplace approximation. The results suggest that the performance and predictive capacity of the spatial models improve as the grid cell size decreases for certain spatial structures. It also appears that different spatial smoothness priors should be applied for different patterns of point data.

Relevance: 80.00%

Publisher:

Abstract:

The kinetics of acid-catalyzed hydrolysis of seven methylated aliphatic epoxides, R1R2C(O)CR3R4 (A: R1=R2=R3=R4=H; B: R1=R2=R3=H, R4=Me; C: R1=R2=H, R3=R4=Me; D: R1=R3=H, R2=R4=Me (trans); E: R1=R3=H, R2=R4=Me (cis); F: R1=R3=R4=Me, R2=H; G: R1=R2=R3=R4=Me), has been studied at 36 ± 1.5 °C. Compounds with two methyl groups on the same carbon atom of the oxirane ring exhibit the highest rate constants (k_eff in M^-1 s^-1: 11.0 ± 1.3 for C, 10.7 ± 2.1 for F, and 8.7 ± 0.7 for G, as opposed to 0.124 ± 0.003 for B, 0.305 ± 0.003 for D, and 0.635 ± 0.036 for E). Ethylene oxide (A) displays the lowest rate of hydrolysis (0.027 M^-1 s^-1). The results are consistent with literature data available for compounds A, B, and C. To model the reactivities we have employed quantum chemical calculations (MNDO, AM1, PM3, and MINDO/3) of the main reaction species. There is a correlation of the logarithm of k_eff with the total energy of epoxide ring opening. The best correlation coefficients (r) were obtained using the AM1 and MNDO methods (0.966 and 0.957, respectively). However, unlike MNDO, AM1 predicts approximately zero energy barriers for the oxirane ring opening of compounds B, C, E and G, which is not consistent with published kinetic data. Thus, the MNDO method provides a preferential means of modeling the acidic hydrolysis of this series of methylated oxiranes. The general ranking of mutagenicity in vitro, A > B > C, is in line with the concept that this sequence gradually moves epoxide reactivity away from the optimum for genotoxicity toward reactivities leading to greater biological detoxification.

Relevance: 80.00%

Publisher:

Abstract:

The high priority of monitoring workers exposed to nitrobenzene is a consequence of clear findings of experimental carcinogenicity of nitrobenzene and the associated evaluations by the International Agency for Research on Cancer. Eighty male employees of a nitrobenzene reduction plant, with potential skin contact with nitrobenzene and aniline, participated in a current medical surveillance programme. Blood samples were routinely taken and analysed for aniline, 4-aminodiphenyl (4-ADP) and benzidine adducts of haemoglobin (Hb) and human serum albumin (HSA). Levels of methaemoglobin (Met-Hb) and of carbon monoxide haemoglobin (CO-Hb) were also monitored. Effects of smoking were straightforward. Using the Wilcoxon rank-sum test, we found that very clear-cut and statistically significant smoking effects (about 3-fold increases) were apparent on CO-Hb (P = 0.00085) and on the Hb adduct of 4-ADP (P = 0.0006). The mean aniline-Hb adduct level in smokers was 1.5 times higher than in non-smokers; the significance (P = 0.05375) was close to the 5% level. The strongest correlation was evident between the Hb and HSA adducts of aniline (r_s = 0.846). Less pronounced correlations (but with P values < 0.02) appeared between aniline-Hb and 4-ADP-Hb adducts (r_s = 0.388), between 4-ADP and 4-ADP-HSA adducts (r_s = 0.373), and between 4-ADP-Hb and aniline-HSA adducts (r_s = 0.275). In view of the proposal for additional use of the aniline-HSA adduct for biological monitoring, particularly in cases of acute overexposure or poisoning, the strong correlation of the Hb and HSA conjugates is noteworthy; the ratio aniline-HSA:aniline-Hb was 1:42 for the entire cohort.
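The Wilcoxon rank-sum comparison used in the study can be sketched with its normal approximation; the "smoker" and "non-smoker" values below are synthetic stand-ins, not the cohort's measurements.

```python
import numpy as np
from math import erf, sqrt

def rank_sum_test(x, y):
    """Two-sided Wilcoxon rank-sum test using the large-sample normal
    approximation: rank the pooled data, sum the ranks of group x,
    and compare against the null mean and variance of that sum."""
    data = np.concatenate((x, y))
    order = data.argsort()
    ranks = np.empty(len(data))
    ranks[order] = np.arange(1, len(data) + 1)
    for v in np.unique(data):                # average ranks over ties
        mask = data == v
        ranks[mask] = ranks[mask].mean()
    n1, n2 = len(x), len(y)
    w = ranks[:n1].sum()                     # rank sum of group x
    mu = n1 * (n1 + n2 + 1) / 2
    sigma = sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (w - mu) / sigma
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p

# synthetic CO-Hb-like values: one group clearly shifted upwards
smokers = np.array([4.1, 4.5, 5.0, 4.8, 4.2, 4.9, 5.3, 4.4])
nonsmokers = np.array([1.2, 1.5, 1.1, 1.8, 1.4, 1.6, 1.3, 1.7])
z, p = rank_sum_test(smokers, nonsmokers)
print(p < 0.01)    # -> True: a clear-cut group difference
```

Because the test uses ranks rather than raw values, it is robust to the skewed distributions typical of biomonitoring data, which is presumably why it was chosen here.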