958 results for Data frequency
Abstract:
Keyword identification in one of two simultaneous sentences is improved when the sentences differ in F0, particularly when they are almost continuously voiced. Sentences of this kind were recorded, monotonised using PSOLA, and re-synthesised to give a range of harmonic ΔF0s (0, 1, 3, and 10 semitones). They were additionally re-synthesised by LPC with the LPC residual frequency shifted by 25% of F0, to give excitation with inharmonic but regularly spaced components. Perceptual identification of frequency-shifted sentences showed a similar large improvement with nominal ΔF0 as seen for harmonic sentences, although overall performance was about 10% poorer. We compared performance with that of two autocorrelation-based computational models comprising four stages: (i) peripheral frequency selectivity and half-wave rectification; (ii) within-channel periodicity extraction; (iii) identification of the two major peaks in the summary autocorrelation function (SACF); (iv) a template-based approach to speech recognition using dynamic time warping. One model sampled the correlogram at the target-F0 period and performed spectral matching; the other deselected channels dominated by the interferer and performed matching on the short-lag portion of the residual SACF. Both models reproduced the monotonic increase observed in human performance with increasing ΔF0 for the harmonic stimuli, but not for the frequency-shifted stimuli. A revised version of the spectral-matching model, which groups patterns of periodicity that lie on a curve in the frequency-delay plane, showed a closer match to the perceptual data for frequency-shifted sentences. The results extend the range of phenomena originally attributed to harmonic processing to grouping by common spectral pattern.
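As a concrete illustration of stages (i)-(iii), the following is a minimal sketch (not the authors' implementation) of a summary autocorrelation function: per-channel half-wave rectification and autocorrelation, summed across channels, with the two largest peaks in the pitch-period range taken as candidate F0s. The two pure tones standing in for filterbank channels, and all parameter values, are illustrative assumptions.

```python
import numpy as np

def sacf_f0_candidates(channels, fs, fmin=80.0, fmax=400.0):
    """Sum per-channel autocorrelations into a summary ACF (SACF) and
    return the two largest peaks in the pitch-period range as F0s (Hz)."""
    max_lag = int(fs / fmin)
    min_lag = int(fs / fmax)
    sacf = np.zeros(max_lag + 1)
    for x in channels:
        x = np.maximum(x, 0.0)  # half-wave rectification (stage i)
        ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # stage ii
        sacf += ac[:max_lag + 1]
    # stage iii: local maxima of the SACF within the plausible period range
    peaks = [l for l in range(min_lag + 1, max_lag)
             if sacf[l] > sacf[l - 1] and sacf[l] >= sacf[l + 1]]
    peaks.sort(key=lambda l: sacf[l], reverse=True)
    return [fs / l for l in peaks[:2]]

# two tones standing in for filterbank channels of a two-voice mixture
fs = 16000
t = np.arange(int(0.05 * fs)) / fs
channels = [np.sin(2 * np.pi * 120 * t), 0.8 * np.sin(2 * np.pi * 170 * t)]
print(sacf_f0_candidates(channels, fs))  # roughly [120, 170] Hz
```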
Abstract:
In some studies, the data are not measurements but comprise counts or frequencies of particular events. In such cases, an investigator may be interested in whether one specific event happens more frequently than another or whether an event occurs with a frequency predicted by a scientific model.
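A minimal sketch of the second question (does an event occur with the frequency a model predicts?) using Pearson's chi-square goodness-of-fit test; the counts and model proportions are invented for illustration.

```python
from scipy.stats import chisquare

observed = [52, 31, 10, 7]              # counts of four event types (hypothetical)
model_props = [0.50, 0.30, 0.12, 0.08]  # frequencies the model predicts (hypothetical)
expected = [p * sum(observed) for p in model_props]

chi2, p = chisquare(observed, f_exp=expected)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")  # large p: data consistent with the model
```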
Abstract:
Numerous senile plaques are one of the most characteristic histological findings in SDAT brains. Large classical plaques may develop from smaller uncored forms. There is no strong evidence that, once formed, plaques disappear from the tissue. We have examined cresyl-violet-stained sections of the parahippocampal gyrus (PHG), hippocampus, frontal lobe and temporal lobe of five SDAT patients. The frequency of plaques of various sizes was determined in each of these brain regions. Statistical analysis showed that the ratio of large plaques to small plaques was greater in the hippocampal formation (especially the PHG) than in the neocortex. One explanation of these results is that plaques grow more rapidly in the hippocampal formation than elsewhere. Alternatively, if the rate of plaque growth is much the same in different brain regions, the data suggest that plaques develop first in the hippocampal formation (especially the PHG) and only later spread to the neocortex. This interpretation is also consistent with the theory that the neuropathology of SDAT spreads from the olfactory cortex via the hippocampal formation to the neocortex. Further development of this technique may help identify the site of the primary lesion in SDAT.
Abstract:
This thesis presents the results from an investigation into the merits of analysing Magnetoencephalographic (MEG) data in the context of dynamical systems theory. MEG encompasses both the methods for measuring minute magnetic flux variations at the scalp, resulting from neuro-electric activity in the neocortex, and the techniques required to process and extract useful information from these measurements. As a result of its unique mode of action - directly measuring neuronal activity via the resulting magnetic field fluctuations - MEG possesses a number of useful qualities which could potentially make it a powerful addition to any brain researcher's arsenal. Unfortunately, MEG research has so far failed to fulfil its early promise, being hindered in its progress by a variety of factors. Conventionally, the analysis of MEG has been dominated by the search for activity in certain spectral bands - the so-called alpha, delta, beta, etc. bands that are commonly referred to in both academic and lay publications. Other efforts have centred upon generating optimal fits of "equivalent current dipoles" that best explain the observed field distribution. Many of these approaches carry the implicit assumption that the dynamics which result in the observed time series are linear. This is despite a variety of reasons which suggest that nonlinearity might be present in MEG recordings. By using methods that allow for nonlinear dynamics, the research described in this thesis avoids these restrictive linearity assumptions. A crucial concept underpinning this project is the belief that MEG recordings are mere observations of the evolution of the true underlying state, which is unobservable and is assumed to reflect some abstract brain cognitive state. Further, we maintain that it is unreasonable to expect these processes to be adequately described in the traditional way: as a linear sum of a large number of frequency generators. Our position is that the combined action of a relatively small number of these generators, coupled with external and dynamic noise sources, is more than sufficient to account for the complexity observed in MEG recordings. One of the main objectives of this thesis is therefore to demonstrate that much more effective and powerful analysis of MEG can be achieved if one assumes the presence of both linear and nonlinear characteristics from the outset. Another problem that has plagued MEG researchers is the extremely low signal-to-noise ratios that are obtained. As the magnetic flux variations resulting from actual cortical processes can be extremely minute, the measuring devices used in MEG are, necessarily, extremely sensitive. The unfortunate side-effect of this is that even commonplace phenomena such as the Earth's geomagnetic field can easily swamp signals of interest. This problem is commonly addressed by averaging over a large number of recordings. However, this has a number of notable drawbacks. In particular, it is difficult to synchronise high-frequency activity which might be of interest, and often these signals will be cancelled out by the averaging process. Other problems that have been encountered are the high costs and low portability of state-of-the-art multichannel machines. The result of this is that the use of MEG has, hitherto, been restricted to large institutions which are able to afford the high costs associated with the procurement and maintenance of these machines.
In this project, we seek to address these issues by working almost exclusively with single channel, unaveraged MEG data. We demonstrate the applicability of a variety of methods originating from the fields of signal processing, dynamical systems, information theory and neural networks, to the analysis of MEG data. It is noteworthy that while modern signal processing tools such as independent component analysis, topographic maps and latent variable modelling have enjoyed extensive success in a variety of research areas from financial time series modelling to the analysis of sun spot activity, their use in MEG analysis has thus far been extremely limited. It is hoped that this work will help to remedy this oversight.
Abstract:
A technique is presented for the development of a high-precision, high-resolution Mean Sea Surface (MSS) model. The model utilises radar altimetric sea surface heights extracted from the geodetic phase of the ESA ERS-1 mission. The methodology uses a modified Le Traon et al. (1995) cubic-spline fit of dual ERS-1 and TOPEX/Poseidon crossovers for the minimisation of radial orbit error. The procedure then uses Fourier domain processing techniques for spectral optimal interpolation of the mean sea surface in order to reduce residual errors within the model. Additionally, a multi-satellite mean sea surface integration technique is investigated to supplement the first model with additional enhanced data from the GEOSAT geodetic mission. The methodology employs a novel technique that combines the Stokes and Vening-Meinesz transformations, again in the spectral domain. This allows the presentation of a new enhanced GEOSAT gravity anomaly field.
Abstract:
We present and evaluate a novel idea for scalable lossy colour image coding with Matching Pursuit (MP) performed in a transform domain. The idea is to exploit correlations in RGB colour space between image subbands after wavelet transformation rather than in the spatial domain. We propose a simple quantisation and coding scheme for the colour MP decomposition, based on Run-Length Encoding (RLE), which can achieve performance comparable to JPEG 2000 even though the latter utilises careful data modelling at the coding stage. Thus, the obtained image representation has the potential to outperform JPEG 2000 with a more sophisticated coding algorithm.
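Since the coding stage rests on run-length encoding, here is a minimal sketch of RLE over a quantised coefficient stream; the zero-heavy input is an illustrative stand-in, not the paper's actual symbol format.

```python
def rle_encode(symbols):
    """Encode a sequence as (value, run_length) pairs."""
    runs = []
    for s in symbols:
        if runs and runs[-1][0] == s:
            runs[-1][1] += 1
        else:
            runs.append([s, 1])
    return [tuple(r) for r in runs]

# quantised MP coefficients are sparse, so zero runs compress well
coeffs = [0, 0, 0, 5, 0, 0, -3, 0, 0, 0, 0, 2]
print(rle_encode(coeffs))  # [(0, 3), (5, 1), (0, 2), (-3, 1), (0, 4), (2, 1)]
```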
Abstract:
The warehouse is an essential component of the supply chain, linking the chain partners and providing them with functions of product storage, inbound and outbound operations, and value-added processes. Allocation of warehouse resources should be efficient and effective to achieve optimum productivity and reduce operational costs. Radio frequency identification (RFID) is a technology capable of providing real-time information about supply chain operations. It has been used by warehousing and logistics enterprises to achieve reduced shrinkage, improved material handling and tracking, and increased accuracy of data collection. However, both academics and practitioners express concerns about challenges to RFID adoption in the supply chain. This paper provides a comprehensive analysis of the problems encountered in RFID implementation at warehouses, discussing the theoretical and practical adoption barriers and the causes of not achieving the full potential of the technology. Lack of foreseeable return on investment (ROI) and high costs are the most commonly reported obstacles. The variety of standards and radio wave frequencies is identified as a source of concern for decision makers. Inaccurate performance of RFID within the warehouse environment is examined, and the integration challenges between warehouse management systems and RFID technology are described. The paper discusses existing solutions to the technological, investment and performance barriers to RFID adoption. Factors to consider when implementing RFID technology are given to help alleviate implementation problems. By illustrating the challenges of RFID in the warehouse environment and discussing possible solutions, the paper aims to help both academics and practitioners focus on the key areas that constitute an obstacle to the technology's growth. As more studies address these challenges, the realisation of RFID's benefits for warehouses and the supply chain will draw closer.
Abstract:
The size frequency distributions of discrete β-amyloid (Aβ) deposits were studied in single sections of the temporal lobe from patients with Alzheimer's disease. The size distributions were unimodal and positively skewed. In 18/25 (72%) tissues examined, a log normal distribution was a good fit to the data. This suggests that the abundances of deposit sizes are distributed randomly on a log scale about a mean value. Three hypotheses were proposed to account for the data: (1) sectioning in a single plane, (2) growth and disappearance of Aβ deposits, and (3) the origin of Aβ deposits from clusters of neuronal cell bodies. Size distributions obtained by serial reconstruction through the tissue were similar to those observed in single sections, which would not support the first hypothesis. The log normal distribution of Aβ deposit size suggests a model in which the rate of growth of a deposit is proportional to its volume. However, mean deposit size and the ratio of large to small deposits were not positively correlated with patient age or disease duration. The frequency distribution of Aβ deposits which were closely associated with 0, 1, 2, 3, or more neuronal cell bodies deviated significantly from a log normal distribution, which would not support the neuronal origin hypothesis. On the basis of the present data, growth and resolution of Aβ deposits would appear to be the most likely explanation for the log normal size distributions.
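A minimal sketch of the log-normal goodness-of-fit assessment described here, using simulated diameters and scipy's lognorm parameterisation (the Kolmogorov-Smirnov p-value is only approximate once parameters are estimated from the same sample).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sizes = rng.lognormal(mean=3.0, sigma=0.5, size=200)  # simulated deposit diameters

shape, loc, scale = stats.lognorm.fit(sizes, floc=0)  # fit with location fixed at 0
ks, p = stats.kstest(sizes, "lognorm", args=(shape, loc, scale))
print(f"KS = {ks:.3f}, p = {p:.3f}")  # large p: log normal fits, as in 18/25 tissues
```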
Abstract:
The size frequency distributions of diffuse, primitive and classic beta/A4 deposits were studied in single sections of the hippocampus, parahippocampal gyrus (PHG) and lateral occipitotemporal gyrus (LOT) in five cases of Alzheimer's disease. In most brain regions, the size distribution of the diffuse deposits was significantly different from that of the primitive and classic deposits. The data suggested that larger diffuse deposits were converted less often into primitive and classic deposits. Significant differences in the size distribution of primitive deposits were commonly observed between brain regions in which there was no difference in the size distribution of the diffuse deposits. Hence, local brain factors may influence the size of diffuse deposits that can be converted into mature amyloid deposits.
Abstract:
The factors determining the size of individual β-amyloid (Aβ) deposits and their size frequency distribution in tissue from Alzheimer's disease (AD) patients have not been established. In 23/25 cortical tissues from 10 AD patients, the frequency of Aβ deposits declined exponentially with increasing size. In a random sample of 400 Aβ deposits, 88% were closely associated with one or more neuronal cell bodies. The frequency distribution of Aβ deposits which were associated with 0, 1, 2, ..., n neuronal cell bodies deviated significantly from a Poisson distribution, suggesting a degree of clustering of the neuronal cell bodies. In addition, the frequency of Aβ deposits declined exponentially as the number of associated neuronal cell bodies increased. Aβ deposit area was positively correlated with the frequency of associated neuronal cell bodies, the degree of correlation being greater for pyramidal cells than for smaller neurons. These data suggested: (1) the number of closely adjacent neuronal cell bodies which simultaneously secrete Aβ is an important factor determining the size of an Aβ deposit, and (2) the exponential decline in the frequency of larger Aβ deposits reflects the low probability that larger numbers of adjacent neurons will secrete Aβ simultaneously to form a deposit.
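A minimal sketch of checking the exponential decline of frequency with size: bin the deposit sizes and fit f(s) = a * exp(-b * s) to the bin counts. The sizes and binning are simulated stand-ins.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
sizes = rng.exponential(scale=20.0, size=300)  # simulated deposit sizes (um)

counts, edges = np.histogram(sizes, bins=10)
centres = 0.5 * (edges[:-1] + edges[1:])

def decay(s, a, b):
    return a * np.exp(-b * s)

(a, b), _ = curve_fit(decay, centres, counts, p0=(counts[0], 0.05))
print(f"fitted decline rate b = {b:.3f} per um")  # ~1/20 for this simulation
```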
Abstract:
The use of quantitative methods has become increasingly important in the study of neuropathology, especially in neurodegenerative disease. Disorders such as Alzheimer's disease (AD) and the frontotemporal dementias (FTD) are characterized by the formation of discrete, microscopic, pathological lesions which play an important role in pathological diagnosis. This chapter reviews the advantages and limitations of the different methods of quantifying pathological lesions in histological sections, including estimates of density, frequency, coverage, and the use of semi-quantitative scores. The sampling strategies by which these quantitative measures can be obtained from histological sections, including plot or quadrat sampling, transect sampling, and point-quarter sampling, are described. In addition, data analysis methods commonly used to analyse quantitative data in neuropathology, including analysis of variance (ANOVA), polynomial curve fitting, multiple regression, classification trees, and principal components analysis (PCA), are discussed. These methods are illustrated with reference to quantitative studies of a variety of neurodegenerative disorders.
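As one concrete example of the listed methods, a minimal one-way ANOVA comparing lesion densities across brain regions; region names and values are simulated for illustration.

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(2)
# simulated lesion densities (per mm^2) in three regions, 12 sections each
hippocampus = rng.normal(25, 5, size=12)
phg = rng.normal(30, 5, size=12)
frontal = rng.normal(18, 5, size=12)

F, p = f_oneway(hippocampus, phg, frontal)
print(f"F = {F:.2f}, p = {p:.4f}")  # small p: mean density differs between regions
```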
Abstract:
Deposition of β-amyloid (Aβ), a 'signature' pathological lesion of Alzheimer's disease (AD), is also characteristic of Down's syndrome (DS), and has been observed in dementia with Lewy bodies (DLB) and corticobasal degeneration (CBD). To determine whether the growth of Aβ deposits was similar in these disorders, the size frequency distributions of the diffuse ('pre-amyloid'), primitive ('neuritic'), and classic ('dense-cored') Aβ deposits were compared in AD, DS, DLB, and CBD. All size distributions had essentially the same shape, i.e., they were unimodal and positively skewed. Mean size of Aβ deposits, however, varied between disorders. Mean diameters of the diffuse, primitive, and classic deposits were greatest in DS, DS and CBD, and DS, respectively, while the smallest deposits, on average, were recorded in DLB. Although the shape of the frequency distributions was approximately log-normal, the model underestimated the frequency of smaller deposits and overestimated the frequency of larger deposits in all disorders. A 'power-law' model fitted the size distributions of the primitive deposits in AD, DS, and DLB, and the diffuse deposits in AD. The data suggest: (1) similarities in size distributions of Aβ deposits among disorders, (2) growth of deposits varies with subtype and disorder, (3) different factors are involved in the growth of the diffuse/primitive and classic deposits, and (4) log-normal and power-law models do not completely account for the size frequency distributions.
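A minimal sketch of pitting the two models against each other: fit scipy's lognorm and pareto (a simple power-law form) to the same simulated sizes and compare AIC. A dedicated package such as powerlaw would treat tail fitting more carefully.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
sizes = rng.lognormal(mean=3.0, sigma=0.6, size=250)  # simulated deposit sizes

def aic(dist, data):
    params = dist.fit(data, floc=0)  # location fixed at 0 for both models
    loglik = np.sum(dist.logpdf(data, *params))
    return 2 * len(params) - 2 * loglik

print("log-normal AIC:", round(aic(stats.lognorm, sizes), 1))
print("power-law  AIC:", round(aic(stats.pareto, sizes), 1))
# the lower AIC identifies the better-fitting model for these data
```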
Abstract:
Although event-related potentials (ERPs) are widely used to study sensory, perceptual and cognitive processes, it remains unknown whether they are phase-locked signals superimposed upon the ongoing electroencephalogram (EEG) or result from phase-alignment of the EEG. Previous attempts to discriminate between these hypotheses have been unsuccessful, but here a new test is presented based on the prediction that ERPs generated by phase-alignment will be associated with event-related changes in frequency whereas evoked ERPs will not. Using empirical mode decomposition (EMD), which allows measurement of narrow-band changes in the EEG without predefining frequency bands, evidence was found for transient frequency slowing in recognition memory ERPs but not in simulated data derived from the evoked model. Furthermore, the timing of phase-alignment was frequency dependent, with the earliest alignment occurring at high frequencies. Based on these findings, the Firefly model was developed, which proposes that both evoked and induced power changes derive from frequency-dependent phase-alignment of the ongoing EEG. Simulated data derived from the Firefly model provided a close match with empirical data and the model was able to account for (i) the shape and timing of ERPs at different scalp sites, (ii) the event-related desynchronization in alpha and synchronization in theta, and (iii) changes in the power density spectrum from the pre-stimulus baseline to the post-stimulus period. The Firefly model therefore provides not only a unifying account of event-related changes in the EEG but also a possible mechanism for cross-frequency information processing.
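A minimal sketch of the central measurement: decompose a signal into narrow-band modes with EMD (here via the PyEMD package, an assumed dependency) and read off instantaneous frequency from the Hilbert transform of the dominant mode. The simulated signal slows from 10 Hz to 8 Hz, mimicking the transient frequency slowing reported here.

```python
import numpy as np
from scipy.signal import hilbert
from PyEMD import EMD  # assumed installed (pip install EMD-signal)

fs = 250.0
t = np.arange(0, 2.0, 1 / fs)
freq = np.where(t < 1.0, 10.0, 8.0)  # 10 Hz slowing to 8 Hz
x = np.sin(2 * np.pi * np.cumsum(freq) / fs) + 0.05 * np.random.randn(len(t))

imfs = EMD()(x)                                         # empirical mode decomposition
imf = imfs[np.argmax([np.sum(m ** 2) for m in imfs])]   # dominant-power mode
phase = np.unwrap(np.angle(hilbert(imf)))
inst_freq = np.diff(phase) * fs / (2 * np.pi)           # instantaneous frequency (Hz)
print(inst_freq[10], inst_freq[-10])                    # ~10 Hz early, ~8 Hz late
```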
Abstract:
The main objective of the project is to enhance the already effective health and usage monitoring system (HUMS) for helicopters by analysing structural vibrations to recognise different flight conditions directly from sensor information. The goal of this paper is to develop a new method to select the sensors and frequency bands that are best for detecting changes in flight conditions. We projected frequency information onto a 2-dimensional space in order to visualise flight-condition transitions using Generative Topographic Mapping (GTM) and a variant which supports simultaneous feature selection. We created an objective measure of the separation between different flight conditions in the visualisation space by calculating the Kullback-Leibler (KL) divergence between Gaussian mixture models (GMMs) fitted to each class: the higher the KL divergence, the better the interclass separation. To find the optimal combination of sensors, they were considered in pairs, triples and groups of four sensors. The sensor triples provided the best result in terms of KL divergence. We also found that the use of a variational training algorithm for the GMMs gave more reliable results.
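A minimal sketch of the separation measure: since the KL divergence between two GMMs has no closed form, estimate D(p||q) by Monte Carlo as the mean of log p(x) - log q(x) over samples drawn from p. The 2-D point clouds stand in for the projected flight conditions; sklearn supplies the GMMs.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
class_a = rng.normal([0.0, 0.0], 1.0, size=(300, 2))  # flight condition A (simulated)
class_b = rng.normal([2.0, 1.0], 1.0, size=(300, 2))  # flight condition B (simulated)

p = GaussianMixture(n_components=2, random_state=0).fit(class_a)
q = GaussianMixture(n_components=2, random_state=0).fit(class_b)

x, _ = p.sample(5000)  # Monte Carlo samples from p
kl = np.mean(p.score_samples(x) - q.score_samples(x))
print(f"KL(p||q) ~ {kl:.2f} nats")  # higher = better interclass separation
```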
Abstract:
In this paper, we propose a resource allocation scheme to minimize transmit power for multicast orthogonal frequency division multiple access (OFDMA) systems. The proposed scheme allows users to have different symbol error rates (SER) across subcarriers and guarantees an average bit error rate and transmission rate for all users. We first provide an algorithm to determine the optimal bits and target SER on subcarriers. Because the worst-case complexity of the optimal algorithm is exponential, we further propose a suboptimal algorithm that separately assigns bits and adjusts SER with lower complexity. Numerical results show that the proposed algorithm can effectively improve the performance of multicast OFDMA systems and that the performance of the suboptimal algorithm is close to that of the optimal one.
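In the same spirit as the suboptimal algorithm (though not the paper's exact method), a minimal greedy bit-loading sketch: repeatedly grant one more bit to the subcarrier where the incremental transmit power, here modelled as (2^b - 1)/gain for b bits, is cheapest, until the rate target is met. The gains and the power model are illustrative assumptions.

```python
import heapq

def greedy_bitload(gains, target_bits, max_bits=8):
    """Hughes-Hartogs-style greedy loading: each step adds the single
    cheapest bit, using the incremental cost 2**b / gain."""
    bits = [0] * len(gains)
    heap = [(1.0 / g, i) for i, g in enumerate(gains)]  # cost of each carrier's 1st bit
    heapq.heapify(heap)
    for _ in range(target_bits):
        cost, i = heapq.heappop(heap)
        bits[i] += 1
        if bits[i] < max_bits:
            heapq.heappush(heap, (2 ** bits[i] / gains[i], i))
    return bits

# four subcarriers with unequal gains; the strongest carries the most bits
print(greedy_bitload(gains=[0.9, 0.4, 1.5, 0.2], target_bits=10))  # [3, 2, 4, 1]
```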