24 results for cross-spectral density

in the Aston University Research Archive


Relevance:

100.00%

Publisher:

Abstract:

The rhythm created by spacing a series of brief tones in a regular pattern can be disguised by interleaving identical distractors at irregular intervals. The disguised rhythm can be unmasked if the distractors are allocated to a separate stream from the rhythm by integration with temporally overlapping captors. Listeners identified which of two rhythms was presented, and the accuracy and rated clarity of their judgments were used to estimate the fusion of the distractors and captors. The extent of fusion depended primarily on onset asynchrony and degree of temporal overlap. Harmonic relations had some influence, but only an extreme difference in spatial location was effective (dichotic presentation). Both preattentive and attentionally driven processes governed performance. (PsycINFO Database Record (c) 2012 APA, all rights reserved)

Relevance:

100.00%

Publisher:

Abstract:

It is shown by numerical simulations that a significant increase in the spectral density of a 40-Gb/s wavelength-division-multiplexing (WDM) system can be obtained by controlling the phase of adjacent WDM channels. These simulations are confirmed experimentally at 40 Gb/s using a coherent comb source. This technique allows the spectral density of a nonreturn-to-zero WDM system to be increased from 0.4 to 1 b/s/Hz in a single polarization. Optical filter optimization is required to minimize power crosstalk, and appropriate strategies are discussed in this letter. Index Terms: Filtering, optical communication terminals, phase control, wavelength-division multiplexing (WDM).

Relevance:

90.00%

Publisher:

Abstract:

An initial aim of this project was to evaluate the conventional techniques used in the analysis of newly prepared environmentally friendly water-borne automotive coatings and compare them with solvent-borne coatings having comparable formulations. The investigation was carried out on microtomed layers as well as on complete automotive multi-layer paint systems. Methods used included the very traditional measures of gloss and hardness and the commonly used photo-oxidation index (from FTIR spectral analysis). All of these methods enabled the durability to weathering of the automotive coatings to be initially investigated. However, a primary aim of this work was to develop methods for analysing the early stages of chemical and property changes in both the solvent-borne and water-borne coating systems that take place during outdoor natural weathering exposures and under accelerated artificial exposures. This was achieved by using dynamic mechanical analysis (DMA), in both tension mode on the microtomed films (at all depths of the coating systems, from the uppermost clear-coat right down to the electro-coat) and bending mode on the full (unmicrotomed) systems, as well as MALDI-Tof analysis of the movement of the stabilisers in the full systems. Changes in glass transition temperature and relative cross-link density were determined after weathering, and these were related to changes in the chemistries of the binder systems of the coatings. Concentration profiles of the UV stabilisers (UVA and HALS) in the coating systems, arising from migration, were analysed in separate microtomed layers of the paint samples (depth profiling) after weathering, and diffusion coefficients and solubility parameters were determined for the UV stabilisers in the coating systems.
The methods developed were used to determine the various physical and chemical changes that take place during weathering (photo-oxidation) of the different (water-borne and solvent-borne) systems. The solvent-borne formulations showed fewer changes after weathering (both natural and accelerated) than the corresponding water-borne formulations, due to the lower level of cross-links in the binders of the water-borne systems. The silver systems examined were more durable than the blue systems, due to the reflecting power of the aluminium and the lower temperature of the silver coatings.

Relevance:

80.00%

Publisher:

Abstract:

The cause of the respective rough and smooth fatigue failure surfaces of Neoprene GS:Neoprene W and Neoprene GS:natural rubber vulcanisates is investigated. The contrasting morphology of the vulcanisates is found to be the major factor determining the fatigue behaviour of the blends. Neoprene GS and Neoprene W appear to form homogeneous blends which exhibit physical properties and fatigue failure surfaces intermediate between those of the two homopolymers. Neoprene GS and natural rubber exhibit heterogeneity when blended together. The morphology of these blends is found to influence both the fatigue resistance and the failure surface of the vulcanisates. Exceptional uncut and cut-initiated fatigue lives are observed for blends having an interconnecting network morphology. The network structure and cross-link density of the elastomers in the blends and the addition of carbon black and antioxidant are all found to influence the fatigue resistance but not the failure mechanism of the vulcanisate.

Relevance:

80.00%

Publisher:

Abstract:

This thesis consisted of two major parts: one determining the masking characteristics of pixel noise, the other investigating the properties of the detection filter employed by the visual system. The theoretical cut-off frequency of white pixel noise can be defined from the size of the noise pixel. The empirical cut-off frequency, i.e. the largest size of noise pixels that mimics the effect of white noise in detection, was determined by measuring contrast energy thresholds for grating stimuli in the presence of spatial noise consisting of noise pixels of various sizes and shapes. The critical, i.e. minimum, number of noise pixels per grating cycle needed to mimic the effect of white noise in detection was found to decrease with the bandwidth of the stimulus. The shape of the noise pixels did not have any effect on the whiteness of pixel noise as long as there was at least the minimum number of noise pixels in all spatial dimensions. Furthermore, the masking power of white pixel noise is best described when the spectral density is calculated by taking into account all the dimensions of the noise pixels, i.e. width, height, and duration, even when there is random luminance in only one of these dimensions. The properties of the detection mechanism employed by the visual system were studied by measuring contrast energy thresholds for complex spatial patterns as a function of area in the presence of white pixel noise. Human detection efficiency was obtained by comparing human performance with an ideal detector. The stimuli consisted of band-pass filtered symbols, uniform and patched gratings, and point stimuli with randomised phase spectra. In agreement with the existing literature, detection performance was found to decline with the increasing amount of detail and contour in the stimulus. A measure of image complexity was developed and successfully applied to the data.
The accuracy of the detection mechanism seems to depend on the spatial structure of the stimulus and the spatial spread of contrast energy.
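The dimensional bookkeeping described above can be sketched numerically. The helper below is hypothetical and assumes the common convention that the spectral density of white pixel noise equals the contrast variance multiplied by the noise-pixel width, height, and duration; the pixel sizes are illustrative:

```python
import numpy as np

def noise_spectral_density(noise, pixel_w, pixel_h, pixel_t=1.0):
    """Hypothetical helper: spectral density of white pixel noise as
    contrast variance times all noise-pixel dimensions (width, height,
    duration), as the abstract's accounting suggests."""
    c_var = np.var(noise)  # RMS-contrast variance of the noise field
    return c_var * pixel_w * pixel_h * pixel_t

rng = np.random.default_rng(0)
# 64x64 field of luminance-contrast values, one value per noise pixel
noise = rng.normal(0.0, 0.2, size=(64, 64))
# illustrative pixel size: 0.05 deg x 0.05 deg, static (duration factor 1)
N = noise_spectral_density(noise, pixel_w=0.05, pixel_h=0.05)
```

Enlarging the noise pixels (up to the empirical cut-off) raises the spectral density, and hence the masking power, without changing the contrast variance.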

Relevance:

80.00%

Publisher:

Abstract:

We consider the random input problem for a nonlinear system modeled by the integrable one-dimensional self-focusing nonlinear Schrödinger equation (NLSE). We concentrate on the properties obtained from the direct scattering problem associated with the NLSE. We discuss some general issues regarding soliton creation from random input. We also study the averaged spectral density of random quasilinear waves generated in the NLSE channel for two models of the disordered input field profile. The first model is symmetric complex Gaussian white noise and the second is a real dichotomous (telegraph) process. For the former model, the closed-form expression for the averaged spectral density is obtained, while for the dichotomous real input we present the small-noise perturbative expansion for the same quantity. In the case of the dichotomous input, we also obtain the distribution of the minimal pulse width required for soliton generation. The obtained results can be applied to a multitude of problems, including random nonlinear Fraunhofer diffraction, transmission properties of randomly apodized long-period fiber Bragg gratings, and the propagation of incoherent pulses in optical fibers.
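As an illustrative numerical check of the first (Gaussian) input model, the averaged spectral density of symmetric complex Gaussian white noise can be estimated by ensemble-averaging periodograms. This is a Monte Carlo sketch with arbitrary grid parameters, not the closed-form expression derived in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n_t, n_real, dt = 256, 200, 0.1   # grid size, realizations, time step (illustrative)

psd_sum = np.zeros(n_t)
for _ in range(n_real):
    # symmetric complex Gaussian white noise with unit variance per sample
    q = (rng.normal(size=n_t) + 1j * rng.normal(size=n_t)) / np.sqrt(2)
    psd_sum += np.abs(np.fft.fft(q) * dt) ** 2   # single-realization periodogram

# ensemble- and grid-normalized averaged spectral density;
# for white noise it should be flat at the level dt
avg_psd = psd_sum / (n_real * n_t * dt)
```

For white input the averaged density is flat; structured (e.g. dichotomous) input would instead imprint its correlation function on the spectrum.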

Relevance:

80.00%

Publisher:

Abstract:

This paper explores a new method of analysing muscle fatigue within the muscles predominantly used during microsurgery. The electromyographic (EMG) data captured from these muscles are analysed for defining patterns relating to muscle fatigue. The analysis consists of dynamically embedding the EMG signals from a single muscle channel into an embedded matrix. Muscle fatigue is determined from an entropy measure characterised by the singular values of the dynamically embedded (DE) matrix. The paper compares this new method with the traditional method of using mean frequency shifts in the EMG signal's power spectral density. Linear regressions are fitted to the results from both methods, and the coefficients of variation of both their slope and point of intercept are determined. The complexity method proves slightly more robust, in that the coefficient of variation for the DE method shows lower variability than that of the conventional mean frequency analysis.
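A minimal sketch of the singular-value entropy idea, assuming a plain delay embedding and an illustrative embedding dimension (the function name and parameters are hypothetical, not the paper's exact recipe):

```python
import numpy as np

def de_entropy(signal, dim=10):
    """Entropy of the normalized singular-value spectrum of a
    dynamically embedded (delay-embedded) matrix built from one
    signal channel. `dim` is an illustrative embedding dimension."""
    # rows are overlapping length-`dim` windows of the signal
    X = np.lib.stride_tricks.sliding_window_view(signal, dim)
    s = np.linalg.svd(X, compute_uv=False)
    p = s / s.sum()                      # normalized singular spectrum
    p = p[p > 0]
    return -np.sum(p * np.log(p))        # singular-value entropy

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 1000)
tone = np.sin(2 * np.pi * 50.0 * t)      # narrowband, highly ordered signal
noise = rng.normal(size=1000)            # broadband, disordered signal
```

A narrowband signal concentrates its energy in a couple of singular values (low entropy), while a broadband signal spreads it across all of them (high entropy); a drift of this entropy over a surgical session is what the method tracks as fatigue.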

Relevance:

80.00%

Publisher:

Abstract:

We compute spectra of symmetric random matrices describing graphs with general modular structure and arbitrary inter- and intra-module degree distributions, subject only to the constraint of finite mean connectivities. We also evaluate spectra of a certain class of small-world matrices generated from random graphs by introducing shortcuts via additional random connectivity components. Both adjacency matrices and the associated graph Laplacians are investigated. For the Laplacians, we find Lifshitz-type singular behaviour of the spectral density in a localized region of small |λ| values. In the case of modular networks, we can identify contributions of local densities of states from individual modules. For small-world networks, we find that the introduction of shortcuts can lead to the creation of satellite bands outside the central band of extended states, exhibiting only localized states in the band gaps. Results for the ensemble in the thermodynamic limit are in excellent agreement with those obtained via a cavity approach for large finite single instances, and with direct diagonalization results.
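The direct-diagonalization route mentioned at the end can be sketched on a toy two-module instance; all sizes and link probabilities below are illustrative choices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)
n, half = 400, 200                 # two modules of 200 nodes each (toy sizes)
p_intra, p_inter = 0.04, 0.002     # intra- vs inter-module link probabilities

# sample a symmetric adjacency matrix with modular block structure
A = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        same_module = (i < half) == (j < half)
        if rng.random() < (p_intra if same_module else p_inter):
            A[i, j] = A[j, i] = 1.0

L = np.diag(A.sum(axis=1)) - A     # graph Laplacian
eigs = np.linalg.eigvalsh(L)       # direct diagonalization of one instance

# crude spectral density estimate: normalized eigenvalue histogram
density, edges = np.histogram(eigs, bins=40, density=True)
```

Averaging such histograms over many instances approximates the ensemble spectral density that the cavity and replica calculations give directly in the thermodynamic limit.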

Relevance:

80.00%

Publisher:

Abstract:

We demonstrate a novel time-resolved Q-factor measurement technique and apply it to the analysis of optical packet switching systems with high information spectral density. For the first time, we report time-resolved Q-factor measurements of 42.6 Gbit/s AM-PSK and DQPSK modulated packets, generated by an SGDBR laser under wavelength switching. The time-dependent degradation of Q-factor performance during the switching transient was analysed and found to be correlated with different laser switching characteristics in each case.
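The underlying statistic is the standard mark/space Q-factor, Q = (μ1 − μ0)/(σ1 + σ0); a time-resolved measurement evaluates it in short windows across the packet rather than over the whole stream. A minimal sketch on synthetic samples (all signal parameters are illustrative):

```python
import numpy as np

def q_factor(marks, spaces):
    """Classic Q-factor: separation of the mark ('1') and space ('0')
    level means divided by the sum of their standard deviations."""
    return (marks.mean() - spaces.mean()) / (marks.std() + spaces.std())

rng = np.random.default_rng(4)
marks = rng.normal(1.0, 0.05, 5000)    # sampled '1' levels (illustrative)
spaces = rng.normal(0.0, 0.05, 5000)   # sampled '0' levels

Q = q_factor(marks, spaces)            # whole-stream Q

# time-resolved variant: Q per 500-sample window across the packet,
# which would expose a transient degradation after a wavelength switch
window_q = [q_factor(marks[i:i + 500], spaces[i:i + 500])
            for i in range(0, 5000, 500)]
```

In the experiment described above, a dip in the windowed Q right after the switching instant is the signature correlated with the laser's switching dynamics.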

Relevance:

80.00%

Publisher:

Abstract:

We develop an analytical theory that allows us to identify the information spectral density limits of multimode optical fiber transmission systems. Our approach takes into account the Kerr-effect-induced interactions of the propagating spatial modes and derives closed-form expressions for the spectral density of the corresponding nonlinear distortion. Experimental characterization results have confirmed the accuracy of the proposed models. Application of our theory in different few-mode fiber (FMF) transmission scenarios has predicted a ~10% variation in total system throughput due to changes associated with inter-mode nonlinear interactions, in agreement with an observed 3 dB increase in nonlinear noise power spectral density for a graded-index four-LP-mode fiber. © 2013 Optical Society of America.

Relevance:

80.00%

Publisher:

Abstract:

A broadly tunable master-oscillator power-amplifier (MOPA) picosecond optical pulse source is demonstrated, consisting of an external cavity passively mode-locked laser diode with a tapered semiconductor amplifier. By employing chirped quantum-dot structures on both the oscillator's gain chip and amplifier, a wide tunability range between 1187 and 1283 nm is achieved. Under mode-locked operation, the highest output peak power of 4.39 W is achieved from the MOPA, corresponding to a peak power spectral density of 31.4 dBm/nm. © 1989-2012 IEEE.

Relevance:

80.00%

Publisher:

Abstract:

Cardiotocographic data provide physicians with information about foetal development and permit the assessment of conditions such as foetal distress. An incorrect evaluation of the foetal status can, of course, be very dangerous. To improve the interpretation of cardiotocographic recordings, great interest has been devoted to spectral analysis of foetal heart rate (FHR) variability. It is worth remembering, however, that the FHR is intrinsically an unevenly sampled series, so a zero-order, linear, or cubic spline interpolation can be employed to produce an evenly sampled series. This is not suitable for frequency analyses, because interpolation introduces alterations in the FHR power spectrum. In particular, the interpolation process can produce alterations of the power spectral density that affect, for example, the estimation of the sympatho-vagal balance (SVB, computed as the low-frequency/high-frequency ratio), which represents an important clinical parameter. In order to estimate the frequency spectrum alterations of the FHR variability signal due to interpolation and cardiotocographic storage rates, in this work we simulated uneven FHR series with set characteristics and their evenly spaced versions (with different orders of interpolation and storage rates), and computed the SVB values by power spectral density. For power spectral density estimation we chose the Lomb method, as suggested by other authors for studying uneven heart rate series in adults. Summarising, the obtained results show that evaluating SVB on the evenly spaced FHR series leads to its overestimation, due to both the interpolation process and the storage rate. Cubic spline interpolation, however, produces more robust and accurate results. © 2010 Elsevier Ltd. All rights reserved.
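The Lomb method applies directly to the uneven beat times, with no interpolation step. A minimal sketch on a synthetic unevenly sampled FHR-like series using SciPy's `lombscargle` (the signal parameters and band edges below are illustrative, not the paper's settings):

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(5)

# synthetic unevenly sampled FHR-like series: irregular sample times (s)
t = np.sort(rng.uniform(0.0, 300.0, 1200))
fhr = (140.0
       + 3.0 * np.sin(2 * np.pi * 0.04 * t)   # low-frequency component
       + 2.0 * np.sin(2 * np.pi * 0.30 * t)   # high-frequency component
       + rng.normal(0.0, 0.5, t.size))        # beat-to-beat noise

# Lomb periodogram evaluated on a frequency grid (lombscargle takes rad/s)
f = np.linspace(0.01, 1.0, 500)
pxx = lombscargle(t, fhr - fhr.mean(), 2 * np.pi * f)

# illustrative LF/HF bands; clinical band edges differ for foetal HRV
lf = pxx[(f >= 0.03) & (f < 0.15)].sum()
hf = pxx[(f >= 0.15) & (f < 1.0)].sum()
svb = lf / hf   # sympatho-vagal balance estimate
```

Because the periodogram is computed on the raw uneven samples, the SVB estimate is free of the interpolation-induced bias the study quantifies.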

Relevance:

40.00%

Publisher:

Abstract:

Objective. To use an image analysis system to determine whether there is loss of axons in the olfactory tract (OT) in Alzheimer's disease (AD). Design. A retrospective neuropathological study. Patients. Nine control patients and eight clinically and pathologically verified AD cases. Measurements and Results. There was a reduction in axon density in AD compared with control subjects in the central and peripheral regions of the tract. Axonal loss mainly involved axons with smaller (<2.99 µm²) myelinated cross-sectional areas. Conclusions. The data suggest significant degeneration of axons within the OT, involving the smaller-sized axons. Loss of axons in the OT is likely to be secondary to pathological changes originating within the parahippocampal gyrus rather than to a pathogen spreading into the brain via the olfactory pathways.

Relevance:

30.00%

Publisher:

Abstract:

Minimization of a sum-of-squares or cross-entropy error function leads to network outputs which approximate the conditional averages of the target data, conditioned on the input vector. For classification problems, with a suitably chosen target coding scheme, these averages represent the posterior probabilities of class membership, and so can be regarded as optimal. For problems involving the prediction of continuous variables, however, the conditional averages provide only a very limited description of the properties of the target variables. This is particularly true for problems in which the mapping to be learned is multi-valued, as often arises in the solution of inverse problems, since the average of several correct target values is not necessarily itself a correct value. In order to obtain a complete description of the data, for the purposes of predicting the outputs corresponding to new input vectors, we must model the conditional probability distribution of the target data, again conditioned on the input vector. In this paper we introduce a new class of network models obtained by combining a conventional neural network with a mixture density model. The complete system is called a Mixture Density Network, and can in principle represent arbitrary conditional probability distributions in the same way that a conventional neural network can represent arbitrary functions. We demonstrate the effectiveness of Mixture Density Networks using both a toy problem and a problem involving robot inverse kinematics.
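The core point, that a mixture likelihood can model multi-valued targets where the conditional average fails, can be sketched with the mixture negative log-likelihood a Mixture Density Network minimises. Here the mixture parameters are fixed by hand for a toy two-branch mapping; in a real MDN they would be outputs of the neural network (shapes and the toy data are illustrative):

```python
import numpy as np

def mdn_nll(pi, mu, sigma, t):
    """Mean negative log-likelihood of targets t under a 1-D Gaussian
    mixture with per-sample parameters pi, mu, sigma of shape (n, k)."""
    t = np.asarray(t)[:, None]
    comp = np.exp(-0.5 * ((t - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return -np.log((pi * comp).sum(axis=1)).mean()

# toy multi-valued mapping: targets sit on two equally likely branches
n, k = 100, 2
rng = np.random.default_rng(6)
t = rng.choice([-1.0, 1.0], n) + rng.normal(0.0, 0.1, n)

# two-component mixture matching the two branches
pi = np.full((n, k), 0.5)
mu = np.tile([-1.0, 1.0], (n, 1))
sigma = np.full((n, k), 0.1)
nll_mixture = mdn_nll(pi, mu, sigma, t)

# single Gaussian centred on the conditional average (0), which lies
# between the branches and is itself not a correct target value
nll_average = mdn_nll(np.ones((n, 1)), np.zeros((n, 1)),
                      np.full((n, 1), 0.1), t)
```

The mixture assigns high likelihood to both branches, while the conditional-average model places its mass where no data lie, which is exactly the failure mode the paper motivates.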

Relevance:

30.00%

Publisher:

Abstract:

OBJECTIVE: To assess the effect of using different risk calculation tools on how general practitioners and practice nurses evaluate the risk of coronary heart disease with clinical data routinely available in patients' records. DESIGN: Subjective estimates of the risk of coronary heart disease and the results of four different methods of risk calculation were compared with each other and with a reference standard calculated with the Framingham equation; calculations were based on a sample of patients' records, randomly selected from groups at risk of coronary heart disease. SETTING: General practices in central England. PARTICIPANTS: 18 general practitioners and 18 practice nurses. MAIN OUTCOME MEASURES: Agreement of results of risk estimation and risk calculation with the reference calculation; agreement of general practitioners with practice nurses; sensitivity and specificity of the different methods of risk calculation to detect patients at high or low risk of coronary heart disease. RESULTS: Only a minority of patients' records contained all of the risk factors required for the formal calculation of the risk of coronary heart disease (concentrations of high density lipoprotein (HDL) cholesterol were present in only 21%). Agreement of risk calculations with the reference standard was moderate (kappa=0.33-0.65 for practice nurses and 0.33-0.65 for general practitioners, depending on the calculation tool), showing a trend towards underestimation of risk. Moderate agreement was seen between the risks calculated by general practitioners and practice nurses for the same patients (kappa=0.47-0.58). The British charts gave the most sensitive results for risk of coronary heart disease (practice nurses 79%, general practitioners 80%) and also gave the most specific results for practice nurses (100%), whereas the Sheffield table was the most specific method for general practitioners (89%).
CONCLUSIONS: Routine calculation of the risk of coronary heart disease in primary care is hampered by poor availability of data on risk factors. General practitioners and practice nurses are able to evaluate the risk of coronary heart disease with only moderate accuracy. Data about risk factors need to be collected systematically, to allow the use of the most appropriate calculation tools.
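The agreement statistic reported above is Cohen's kappa: observed agreement between two raters corrected for the agreement expected by chance. A minimal sketch with invented high/low-risk ratings (the data below are illustrative, not from the study):

```python
import numpy as np

def cohens_kappa(a, b):
    """Cohen's kappa for two raters' categorical judgments."""
    a, b = np.asarray(a), np.asarray(b)
    categories = np.union1d(a, b)
    po = np.mean(a == b)  # observed agreement
    # chance agreement from each rater's marginal category frequencies
    pe = sum(np.mean(a == c) * np.mean(b == c) for c in categories)
    return (po - pe) / (1.0 - pe)

# hypothetical risk classifications by a practice nurse vs the reference
nurse = ["high", "high", "low", "low", "low", "high", "low", "low"]
ref   = ["high", "low",  "low", "low", "high", "high", "low", "low"]
kappa = cohens_kappa(nurse, ref)  # moderate agreement for these toy data
```

Values around 0.4 to 0.6, like those reported, are conventionally read as moderate agreement, which is why the study concludes that risk evaluation in primary care is only moderately accurate.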