19 results for Series Summation Method

in Aston University Research Archive


Relevance:

80.00%

Publisher:

Abstract:

Most traditional methods for extracting the relationships between two time series are based on cross-correlation. In a non-linear non-stationary environment, these techniques are not sufficient. We show in this paper how to use hidden Markov models (HMMs) to identify the lag (or delay) between different variables for such data. Adopting an information-theoretic approach, we develop a procedure for training HMMs to maximise the mutual information (MMI) between delayed time series. The method is used to model the oil drilling process. We show that cross-correlation gives no information and that the MMI approach outperforms maximum likelihood.
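The lag-by-mutual-information idea can be sketched numerically. The snippet below is only a minimal illustration of scoring candidate lags with a histogram estimate of mutual information; it is not the paper's HMM training procedure, and all names and parameters are our own. The synthetic dependence is nonlinear (a cosine), so plain cross-correlation is uninformative while the mutual information peaks at the true lag.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram estimate of the mutual information between two 1-D arrays."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0  # avoid log(0) on empty cells
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def best_lag(x, y, max_lag=20):
    """Return the delay of y relative to x that maximises mutual information."""
    scores = {lag: mutual_information(x[:-lag], y[lag:])
              for lag in range(1, max_lag + 1)}
    return max(scores, key=scores.get)

# y is a noisy *nonlinear* function of x delayed by 5 samples, so the
# cross-correlation of x and y is near zero but MI peaks at lag 5.
rng = np.random.default_rng(0)
x = rng.normal(size=2000)
y = np.empty_like(x)
y[5:] = np.cos(3 * x[:-5]) + 0.1 * rng.normal(size=1995)
y[:5] = rng.normal(size=5)
print(best_lag(x, y))
```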

Relevance:

40.00%

Publisher:

Abstract:

The first part of the thesis compares Roth's method with other methods, in particular the method of separation of variables and the finite cosine transform method, for solving certain elliptic partial differential equations arising in practice. In particular we consider the solution of steady state problems associated with insulated conductors in rectangular slots. Roth's method has two main disadvantages, namely the slow rate of convergence of the double Fourier series and the restrictive form of the allowable boundary conditions. A combined Roth-separation of variables method is derived to remove the restrictions on the form of the boundary conditions, and various Chebyshev approximations are used to try to improve the rate of convergence of the series. All the techniques are then applied to the Neumann problem arising from balanced rectangular windings in a transformer window. Roth's method is then extended to deal with problems other than those resulting from static fields. First we consider a rectangular insulated conductor in a rectangular slot when the current is varying sinusoidally with time. An approximate method is also developed and compared with the exact method. The approximation is then used to consider the problem of an insulated conductor in a slot facing an air gap. We also consider the exact method applied to the determination of the eddy-current loss produced in an isolated rectangular conductor by a transverse magnetic field varying sinusoidally with time. The results obtained using Roth's method are critically compared with those obtained by other authors using different methods. The final part of the thesis investigates further the application of Chebyshev methods to the solution of elliptic partial differential equations, an area where Chebyshev approximations have rarely been used. A Poisson equation with a polynomial term is treated first, followed by a slot problem in cylindrical geometry.
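As a numerical aside, the fast convergence that motivates the Chebyshev work can be seen on a toy function. This sketch (our own illustration, not Roth's method or the thesis's slot problems) interpolates Runge's function at Chebyshev nodes and shows the maximum error falling rapidly as the degree doubles:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Interpolate Runge's function at Chebyshev nodes on [-1, 1]; the maximum
# error drops rapidly as the degree doubles (illustrative only).
f = lambda x: 1.0 / (1.0 + 25.0 * x ** 2)
xs = np.linspace(-1.0, 1.0, 1000)
errs = []
for deg in (8, 16, 32):
    k = np.arange(deg + 1)
    nodes = np.cos((2 * k + 1) * np.pi / (2 * (deg + 1)))  # Chebyshev nodes
    coeffs = C.chebfit(nodes, f(nodes), deg)  # exact interpolant at the nodes
    errs.append(float(np.max(np.abs(C.chebval(xs, coeffs) - f(xs)))))
print(errs)
```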

Relevance:

40.00%

Publisher:

Abstract:

In the face of global population growth and the uneven distribution of water supply, a better knowledge of the spatial and temporal distribution of surface water resources is critical. Remote sensing provides a synoptic view of ongoing processes, which addresses the intricate nature of water surfaces and allows an assessment of the pressures placed on aquatic ecosystems. However, the main challenge in identifying water surfaces from remotely sensed data is the high variability of spectral signatures, both in space and time. In the last 10 years only a few operational methods have been proposed to map or monitor surface water at continental or global scale, and each of them shows limitations. The objective of this study is to develop and demonstrate the adequacy of a generic multi-temporal and multi-spectral image analysis method to detect water surfaces automatically, and to monitor them in near-real-time. The proposed approach, based on a transformation of the RGB color space into HSV, provides dynamic information at the continental scale. The validation of the algorithm showed very few omission errors and no commission errors, demonstrating the ability of the proposed algorithm to perform as effectively as human interpretation of the images. The validation of the permanent water surface product with an independent dataset derived from high resolution imagery showed an accuracy of 91.5% and few commission errors. Potential applications of the proposed method have been identified and discussed. The methodology that has been developed is generic: it can be applied to sensors with similar bands with good reliability and minimal effort. Moreover, this experiment at continental scale showed that the methodology is efficient for a large range of environmental conditions. Additional preliminary tests over other continents indicate that the proposed methodology could also be applied at the global scale without too many difficulties.
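The core RGB-to-HSV step can be illustrated with a toy per-pixel classifier. A minimal sketch, assuming placeholder hue and brightness thresholds of our own choosing rather than the ones calibrated in the study:

```python
import colorsys

def looks_like_water(r, g, b, hue_range=(0.5, 0.75), max_value=0.5):
    """Illustrative water test on one RGB pixel: bluish hue, low brightness.
    The thresholds are placeholders, not the ones from the study."""
    # colorsys expects channels in [0, 1]; hue is returned in [0, 1)
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return hue_range[0] <= h <= hue_range[1] and v <= max_value

print(looks_like_water(20, 40, 90))     # dark blue pixel -> True
print(looks_like_water(200, 180, 120))  # bright sandy pixel -> False
```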

Relevance:

30.00%

Publisher:

Abstract:

This paper considers the problem of extracting the relationships between two time series in a non-linear non-stationary environment using hidden Markov models (HMMs). We describe an algorithm which is capable of identifying associations between variables. The method is applied both to synthetic data and real data. We show that HMMs are capable of modelling the oil drilling process and that they outperform existing methods.

Relevance:

30.00%

Publisher:

Abstract:

Most traditional methods for extracting the relationships between two time series are based on cross-correlation. In a non-linear non-stationary environment, these techniques are not sufficient. We show in this paper how to use hidden Markov models (HMMs) to identify the lag (or delay) between different variables for such data. We first present a method using maximum likelihood estimation and propose a simple algorithm which is capable of identifying associations between variables. We also adopt an information-theoretic approach and develop a novel procedure for training HMMs to maximise the mutual information between delayed time series. Both methods are successfully applied to real data. We model the oil drilling process with HMMs and estimate a crucial parameter, namely the lag for return.

Relevance:

30.00%

Publisher:

Abstract:

PURPOSE. Strabismic amblyopia is typically associated with several visual deficits, including loss of contrast sensitivity in the amblyopic eye and abnormal binocular vision. Binocular summation ratios (BSRs) are usually assessed by comparing contrast sensitivity for binocular stimuli (sensBIN) with that measured in the good eye alone (sensGOOD), giving BSR = sensBIN/sensGOOD. This calculation provides an operational index of clinical binocular function, but does not assess whether neuronal mechanisms for binocular summation of contrast remain intact. This study was conducted to investigate this question. METHODS. Horizontal sine-wave gratings were used as stimuli (3 or 9 cyc/deg; 200 ms), and the conventional method of assessment (above) was compared with one in which the contrast in the amblyopic eye was adjusted (normalized) to equate monocular sensitivities. RESULTS. In nine strabismic amblyopes (mean age, 32 years), the results confirmed that the BSR was close to unity when the conventional method was used (little or no binocular advantage), but increased to approximately √2 or higher when the normalization method was used. The results were similar to those for normal control subjects (n = 3; mean age, 38 years) and were consistent with the physiological summation of contrast between the eyes. When the normal observers performed the experiments with a neutral-density (ND) filter in front of one eye, their performance was similar to that of the amblyopes in both methods of assessment. CONCLUSIONS. The results indicate that strabismic amblyopes have mechanisms for binocular summation of contrast and that the amblyopic deficits of binocularity can be simulated with an ND filter. The implications of these results for best clinical practice are discussed. Copyright © Association for Research in Vision and Ophthalmology.
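The BSR itself is a one-line computation. A minimal sketch with illustrative numbers (not data from the study):

```python
def binocular_summation_ratio(sens_binocular, sens_good_eye):
    """BSR = sensBIN / sensGOOD, as defined above."""
    return sens_binocular / sens_good_eye

# Conventional assessment: the amblyopic eye contributes little, BSR near 1.
print(round(binocular_summation_ratio(105.0, 100.0), 2))  # 1.05
# After normalising monocular sensitivities: summation near sqrt(2) ~ 1.41.
print(round(binocular_summation_ratio(141.0, 100.0), 2))  # 1.41
```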

Relevance:

30.00%

Publisher:

Abstract:

Contrast sensitivity is better with two eyes than one. The standard view is that thresholds are about 1.4 (√2) times better with two eyes, and that this arises from monocular responses that, near threshold, are proportional to the square of contrast, followed by binocular summation of the two monocular signals. However, estimates of the threshold ratio in the literature vary from about 1.2 to 1.9, and many early studies had methodological weaknesses. We collected extensive new data, and applied a general model of binocular summation to interpret the threshold ratio. We used horizontal gratings (0.25-4 cycles deg^-1) flickering sinusoidally (1-16 Hz), presented to one or both eyes through frame-alternating ferroelectric goggles with negligible cross-talk, and used a 2AFC staircase method to estimate contrast thresholds and psychometric slopes. Four naive observers completed 20 000 trials each, and their mean threshold ratios were 1.63, 1.69, 1.71, 1.81 (grand mean 1.71), well above the classical √2. Mean ratios tended to be slightly lower (~1.60) at low spatial or high temporal frequencies. We modelled contrast detection very simply by assuming a single binocular mechanism whose response is proportional to (L^m + R^m)^p, followed by fixed additive noise, where L, R are contrasts in the left and right eyes, and m, p are constants. Contrast-gain-control effects were assumed to be negligible near threshold. On this model the threshold ratio is 2^(1/m), implying that m = 1.3 on average, while the Weibull psychometric slope (median 3.28) equals 1.247mp, yielding p = 2.0. Together, the model and data suggest that, at low contrasts across a wide spatiotemporal frequency range, monocular pathways are nearly linear in their contrast response (m close to 1), while a strongly accelerating nonlinearity (p = 2, a 'soft threshold') occurs after binocular summation. [Supported by EPSRC project grant GR/S74515/01]
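The two model relations can be inverted directly: ratio = 2^(1/m) gives m = ln 2 / ln(ratio), and slope = 1.247mp gives p = slope / (1.247m). A quick check with the grand-mean values reported above:

```python
import math

def exponents_from_measurements(threshold_ratio, weibull_slope):
    """Invert ratio = 2**(1/m) and slope = 1.247*m*p from the model above."""
    m = math.log(2) / math.log(threshold_ratio)
    p = weibull_slope / (1.247 * m)
    return m, p

m, p = exponents_from_measurements(1.71, 3.28)
print(round(m, 1), round(p, 1))  # 1.3 2.0
```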

Relevance:

30.00%

Publisher:

Abstract:

The initial image-processing stages of visual cortex are well suited to a local (patchwise) analysis of the viewed scene. But the world's structures extend over space as textures and surfaces, suggesting the need for spatial integration. Most models of contrast vision fall shy of this process because (i) the weak area summation at detection threshold is attributed to probability summation (PS) and (ii) there is little or no advantage of area well above threshold. Both of these views are challenged here. First, it is shown that results at threshold are consistent with linear summation of contrast following retinal inhomogeneity, spatial filtering, nonlinear contrast transduction and multiple sources of additive Gaussian noise. We suggest that the suprathreshold loss of the area advantage in previous studies is due to a concomitant increase in suppression from the pedestal. To overcome this confound, a novel stimulus class is designed where: (i) the observer operates on a constant retinal area, (ii) the target area is controlled within this summation field, and (iii) the pedestal is fixed in size. Using this arrangement, substantial summation is found along the entire masking function, including the region of facilitation. Our analysis shows that PS and uncertainty cannot account for the results, and that suprathreshold summation of contrast extends over at least seven target cycles of grating. © 2007 The Royal Society.

Relevance:

30.00%

Publisher:

Abstract:

A method of determining the spatial pattern of any histological feature in sections of brain tissue which can be measured quantitatively is described and compared with a previously described method. A measurement of a histological feature such as density, area, amount or load is obtained for a series of contiguous sample fields. The regression coefficient (β) is calculated from the measurements taken in pairs, first in pairs of adjacent samples and then in pairs of samples taken at increasing degrees of separation between them, i.e. separated by 2, 3, 4, ..., n units. A plot of β versus the degree of separation between the pairs of sample fields reveals whether the histological feature is distributed randomly, uniformly or in clusters. If the feature is clustered, the analysis determines whether the clusters are randomly or regularly distributed, the mean size of the clusters and the spacing of the clusters. The method is simple to apply and interpret and is illustrated using simulated data and studies of the spatial patterns of blood vessels in the cerebral cortex of normal brain, the degree of vacuolation of the cortex in patients with Creutzfeldt-Jakob disease (CJD) and the characteristic lesions present in Alzheimer's disease (AD). Copyright (C) 2000 Elsevier Science B.V.
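The β-versus-separation procedure is easy to sketch. The snippet below is our own minimal reading of the method: for each separation k it regresses the second member of each pair on the first, using simulated clustered data so that β is positive at small separations and negative near half the cluster spacing:

```python
import numpy as np

def beta_vs_separation(measurements, max_sep):
    """Regression coefficient beta for pairs of sample fields separated by
    1, 2, ..., max_sep units along the series of contiguous fields."""
    x = np.asarray(measurements, dtype=float)
    betas = []
    for k in range(1, max_sep + 1):
        a, b = x[:-k], x[k:]
        betas.append(np.cov(a, b)[0, 1] / np.var(a, ddof=1))  # slope of b on a
    return betas

# Simulated clustered data: high-value clusters of 4 fields spaced 8 apart.
rng = np.random.default_rng(1)
fields = np.tile([5, 5, 5, 5, 0, 0, 0, 0], 8) + rng.normal(0, 0.5, 64)
betas = beta_vs_separation(fields, max_sep=8)
# Positive beta within clusters, negative at half the cluster spacing.
print(betas[0] > 0, betas[3] < 0)
```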

Relevance:

30.00%

Publisher:

Abstract:

This thesis addresses the problem of information hiding in low dimensional digital data, focussing on issues of privacy and security in Electronic Patient Health Records (EPHRs). The thesis proposes a new security protocol based on data hiding techniques for EPHRs. This thesis contends that embedding of sensitive patient information inside the EPHR is the most appropriate solution currently available to resolve the issues of security in EPHRs. Watermarking techniques are applied to one-dimensional time series data such as the electroencephalogram (EEG) to show that they add a level of confidence (in terms of privacy and security) in an individual’s diverse bio-profile (the digital fingerprint of an individual’s medical history), ensure belief that the data being analysed does indeed belong to the correct person, and also that it is not being accessed by unauthorised personnel. Embedding information inside single channel biomedical time series data is more difficult than the standard application for images due to the reduced redundancy. A data hiding approach which has an in-built capability to protect against illegal data snooping is developed. The capability of this secure method is enhanced by embedding not just a single message but multiple messages into an example one-dimensional EEG signal. Embedding multiple messages of similar characteristics, for example identities of clinicians accessing the medical record, helps in creating a log of access, while embedding multiple messages of dissimilar characteristics into an EPHR enhances confidence in the use of the EPHR. The novel method of embedding multiple messages of both similar and dissimilar characteristics into a single channel EEG demonstrated in this thesis shows how this embedding of data boosts the implementation and use of the EPHR securely.
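As a generic illustration of embedding data in a one-dimensional signal (a toy least-significant-bit scheme of our own, not the secure multi-message method developed in the thesis):

```python
import numpy as np

def embed_bits(samples, bits):
    """Hide bits in the least significant bit of 16-bit samples (toy scheme)."""
    s = np.array(samples, dtype=np.int16, copy=True)
    for i, bit in enumerate(bits):
        s[i] = (int(s[i]) & ~1) | bit  # clear the LSB, then set it to the bit
    return s

def extract_bits(samples, n):
    """Read the hidden bits back out of the first n samples."""
    return [int(v) & 1 for v in samples[:n]]

eeg = np.array([123, -456, 789, -1011, 1213, -1415], dtype=np.int16)  # fake signal
marked = embed_bits(eeg, [1, 0, 1, 1, 0])
print(extract_bits(marked, 5))  # [1, 0, 1, 1, 0]
```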

Relevance:

30.00%

Publisher:

Abstract:

This paper proposes a new converter protection method, primarily based on a series dynamic resistor (SDR) that avoids the doubly-fed induction generator (DFIG) control being disabled by crowbar protection during fault conditions. A combined converter protection scheme based on the proposed SDR and conventional crowbar is analyzed and discussed. The main protection advantages are due to the series topology when compared with crowbar and dc-chopper protection. Various fault overcurrent conditions (both symmetrical and asymmetrical) are analyzed and used to design the protection in detail, including the switching strategy and coordination with crowbar, and resistance value calculations. PSCAD/EMTDC simulation results show that the proposed method is advantageous for fault overcurrent protection, especially for asymmetrical faults, in which the traditional crowbar protection may malfunction.

Relevance:

30.00%

Publisher:

Abstract:

We discuss aggregation of data from neuropsychological patients and the process of evaluating models using data from a series of patients. We argue that aggregation can be misleading but not aggregating can also result in information loss. The basis for combining data needs to be theoretically defined, and the particular method of aggregation depends on the theoretical question and characteristics of the data. We present examples, often drawn from our own research, to illustrate these points. We also argue that statistical models and formal methods of model selection are a useful way to test theoretical accounts using data from several patients in multiple-case studies or case series. Statistical models can often measure fit in a way that explicitly captures what a theory allows; the parameter values that result from model fitting often measure theoretically important dimensions and can lead to more constrained theories or new predictions; and model selection allows the strength of evidence for models to be quantified without forcing this into the artificial binary choice that characterizes hypothesis testing methods. Methods that aggregate and then formally model patient data, however, are not automatically preferred to other methods. Which method is preferred depends on the question to be addressed, characteristics of the data, and practical issues like availability of suitable patients, but case series, multiple-case studies, single-case studies, statistical models, and process models should be complementary methods when guided by theory development.

Relevance:

30.00%

Publisher:

Abstract:

We propose an all-fiber method for the generation of ultrafast shaped pulse train bursts from a single pulse based on Fourier Series Developments (FSDs). The implementation of the FSD-based filter only requires the use of a very simple non-apodized Superimposed Fiber Bragg Grating (S-FBG) for the generation of the Shaped Output Pulse Train Burst (SOPTB). In this approach, the shape, the period and the temporal length of the generated SOPTB do not depend on the input pulse rate.
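The Fourier-series picture behind the FSD filter can be sketched numerically: a truncated series reconstructs a periodic rectangular pulse train whose shape and period are fixed by the series terms, independent of the input. This is purely a numerical illustration of the principle, not a model of the S-FBG device:

```python
import numpy as np

def fourier_square_train(t, period, duty, n_terms):
    """Truncated Fourier series of a unit-amplitude rectangular pulse train
    with the given duty cycle (pulses centred at multiples of the period)."""
    y = np.full_like(t, duty)  # DC term
    for n in range(1, n_terms + 1):
        y += (2.0 / (np.pi * n)) * np.sin(np.pi * n * duty) \
             * np.cos(2.0 * np.pi * n * t / period)
    return y

t = np.linspace(0.0, 4.0, 1000)
burst = fourier_square_train(t, period=1.0, duty=0.25, n_terms=15)
# More terms give sharper pulses; the period is set by the series itself.
print(abs(float(burst[0]) - 1.0) < 0.15)
```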

Relevance:

30.00%

Publisher:

Abstract:

The fracture properties of a series of alloys containing 15% chromium and 0.8 to 3.4% carbon are investigated using plane strain fracture toughness testing techniques. The object of the work is to apply a quantitative method of measuring toughness to abrasion resistant materials, which have previously been assessed on an empirical basis, and to examine the relationship between microstructure and KIc in an attempt to improve the toughness of inherently brittle materials. A review of the relevant literature includes discussion of the background to the alloy series under investigation and a survey of the development of fracture mechanics and the emergence of KIc as a toughness parameter. Metallurgical variables such as composition, heat treatment, grain size, and hot working are varied to relate microstructure to toughness, and fractographic evidence is used to substantiate the findings. The results are applied to a model correlating ductile fracture with plastic strain instability and the nucleation of voids. Strain induced martensite formation in austenitic structures is analysed in terms of the plastic energy dissipation mechanisms operating at the crack tip. Emphasis is placed on the lower carbon alloys in the series, and a composition is put forward to optimise wear resistance and toughness. The properties of established competitive materials are compared to the proposed alloy on a toughness and cost basis.

Relevance:

30.00%

Publisher:

Abstract:

The problem considered is that of determining the fluid velocity for linear hydrostatic Stokes flow of slow viscous fluids from measured velocity and fluid stress force on a part of the boundary of a bounded domain. A variational conjugate gradient iterative procedure is proposed, based on solving a series of mixed well-posed boundary value problems for the Stokes operator and its adjoint. In order to stabilize the Cauchy problem, the iterations are ceased according to an optimal order discrepancy principle stopping criterion. Numerical results obtained using the boundary element method confirm that the procedure produces a convergent and stable numerical solution.
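The two ingredients named here, a conjugate gradient iteration and a discrepancy-principle stopping rule, can be sketched on a generic ill-posed linear problem. This is not the paper's Stokes-specific variational procedure, just the stopping logic in miniature:

```python
import numpy as np

def cgls_with_discrepancy(A, b, noise_level, max_iter=200):
    """Conjugate gradients on the normal equations, stopped early by the
    Morozov discrepancy principle (a generic sketch, not the paper's
    Stokes-specific procedure)."""
    x = np.zeros(A.shape[1])
    s = A.T @ (b - A @ x)
    p = s.copy()
    it = 0
    for it in range(max_iter):
        if np.linalg.norm(b - A @ x) <= noise_level:
            break  # stop: residual has reached the noise level
        Ap = A @ p
        alpha = (s @ s) / (Ap @ Ap)
        x = x + alpha * p
        s_new = A.T @ (b - A @ x)
        beta = (s_new @ s_new) / (s @ s)
        p = s_new + beta * p
        s = s_new
    return x, it

# Toy ill-posed problem: recover a smooth profile blurred by a Gaussian kernel.
n = 30
i = np.arange(n)
A = np.exp(-((i[:, None] - i[None, :]) ** 2) / 10.0)  # smoothing operator
x_true = np.sin(np.linspace(0.0, np.pi, n))
rng = np.random.default_rng(3)
noise = 1e-2 * rng.normal(size=n)
b = A @ x_true + noise
x_rec, its = cgls_with_discrepancy(A, b, np.linalg.norm(noise))
print(its < 199, np.linalg.norm(b - A @ x_rec) <= np.linalg.norm(noise))
```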