986 results for Adaptive threshold
Abstract:
Congenital nystagmus is an ocular-motor disorder characterised by involuntary, conjugate, bilateral to-and-fro ocular oscillations. This study presents a method to automatically recognise jerk waveforms within a congenital nystagmus recording and to compute foveation time and foveation position variability. The recordings were performed with subjects looking at visual targets presented in nine eye gaze positions; data were segmented into blocks corresponding to each gaze position. The nystagmus cycles were identified by searching for local minima and maxima (SpEp sequence) in intervals centred on each slope change of the eye position signal (position criterion). The SpEp sequence was then refined using an adaptive threshold applied to the eye velocity signal; the outcome is a robust detection of each slow phase start point, which is fundamental to accurately computing several nystagmus parameters. A total of 1206 slow phases were used to compute the specificity of waveform recognition applying only the position criterion or adding the adaptive threshold; results showed a 25.1% increase in negative predictive value when both features were used. The duration of each foveation window was measured on raw data or using an interpolating function of the congenital nystagmus slow phases; the latter yielded a foveation time estimate less sensitive to noise. © 2010.
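The detection scheme described above (position criterion plus an adaptive velocity threshold) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the threshold rule `k * median(|v|)` and the parameter `k` are assumptions introduced here.

```python
import numpy as np

def detect_slow_phase_starts(position, fs, k=1.0):
    """Sketch: candidate cycle boundaries at slope changes of the eye
    position signal, refined by an adaptive threshold on eye velocity.
    The threshold k * median(|velocity|) is an illustrative assumption."""
    velocity = np.gradient(position) * fs
    # position criterion: indices where the slope of the signal changes sign
    sign_change = np.diff(np.sign(np.diff(position))) != 0
    candidates = np.flatnonzero(sign_change) + 1
    # adaptive threshold recomputed from the velocity of this recording block
    thr = k * np.median(np.abs(velocity))
    # keep candidates whose velocity is below the threshold, i.e. plausible
    # starts of a slow (low-velocity) phase rather than fast-phase artefacts
    return np.array([i for i in candidates if abs(velocity[i]) < thr])
```

On a synthetic triangular (jerk-like) trace this returns the indices of the slope reversals whose local velocity is low, which is the behaviour the refinement step relies on.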
Abstract:
Congenital nystagmus (CN) is an ocular-motor disorder characterised by involuntary, conjugate ocular oscillations, and its pathogenesis is still under investigation. This kind of nystagmus is termed congenital (or infantile) since it can be present at birth or arise in the first months of life. Most CN patients show a considerable decrease in visual acuity: image fixation on the retina is disturbed by the continuous, mainly horizontal, nystagmus oscillations. However, the image of a given target can still be stable during short periods in which eye velocity slows down while the target image lies on the fovea (called foveation intervals). To quantify the extent of nystagmus, eye movement recordings are routinely employed, allowing physicians to extract and analyse its main features, such as waveform shape, amplitude, and frequency. From eye movement recordings it is also possible to compute estimated visual acuity predictors: analytical functions that estimate expected visual acuity from signal features such as foveation time and foveation position variability. These functions extend the information obtained from typical visual acuity measurements (e.g. the Landolt C test) and could support therapy planning or monitoring. This study focuses on detecting CN patients' waveform type and on measuring foveation time. Specifically, it proposes a robust method to recognise cycles corresponding to the specific CN waveform in the eye movement pattern and, for those cycles, to evaluate the exact signal tracts in which a subject foveates. About 40 eye-movement recordings, either infrared-oculographic or electro-oculographic, were acquired from 16 CN subjects. Results suggest that an adaptive threshold applied to the eye velocity signal could improve the estimation of the slow phase start point. This can enhance foveation time computation and reduce the influence of repositioning saccades and data noise on waveform type identification.
Abstract:
We investigate the pattern-dependent decoding failures in full-field electronic dispersion compensation (EDC) by offline processing of experimental signals, and find that the performance of such an EDC receiver may be degraded by an isolated "1" bit surrounded by long strings of consecutive "0s". By reducing the probability of occurrence of this kind of isolated "1" and using a novel adaptive threshold decoding method, we greatly improve the compensation performance to achieve 10-Gb/s on-off keyed signal transmission over 496-km field-installed single-mode fiber without optical dispersion compensation.
Abstract:
The statistical minimum risk pattern recognition problem, when the classification costs are random variables of unknown statistics, is considered. Using medical diagnosis as a possible application, the problem of learning the optimal decision scheme is studied for a two-class, two-action case, as a first step. This reduces to the problem of learning the optimum threshold (for taking appropriate action) on the a posteriori probability of one class. A recursive procedure for updating an estimate of the threshold is proposed. The estimation procedure does not require knowledge of the actual class labels of the sample patterns in the design set. The adaptive scheme of using the present threshold estimate for taking action on the next sample is shown to converge, in probability, to the optimum. The results of a computer simulation study of three learning schemes demonstrate the theoretically predictable salient features of the adaptive scheme.
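A recursive threshold estimator of the kind described can be sketched with running means of the observed random costs. This is an illustrative variant only: the zero-cost-for-correct-action model, the specific update rule, and the class names are assumptions, not the paper's actual procedure.

```python
class ThresholdLearner:
    """Sketch of a recursive estimate of the optimum posterior threshold
    theta* = E[c10] / (E[c10] + E[c01]) for a two-class, two-action
    problem, assuming zero cost for correct actions (an assumption made
    for this sketch). Costs arrive as random samples; only which action's
    cost was incurred is needed, not the true class labels themselves."""

    def __init__(self, theta0=0.5):
        self.theta = theta0
        self.mean_c10 = 0.0  # running mean cost of acting as class 1 on class 0
        self.mean_c01 = 0.0  # running mean cost of acting as class 0 on class 1
        self.n10 = 0
        self.n01 = 0

    def update(self, which, cost):
        # incremental (recursive) running-mean update of the observed cost
        if which == "c10":
            self.n10 += 1
            self.mean_c10 += (cost - self.mean_c10) / self.n10
        else:
            self.n01 += 1
            self.mean_c01 += (cost - self.mean_c01) / self.n01
        denom = self.mean_c10 + self.mean_c01
        if denom > 0:
            self.theta = self.mean_c10 / denom
        return self.theta
```

With constant costs c10 = 3 and c01 = 1, the estimate settles at 0.75, matching the closed-form optimum for this cost model.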
Abstract:
This paper presents an adaptive metering algorithm for enhancing the electronic screening (e-screening) operation at truck weight stations. The algorithm uses a feedback control mechanism to regulate the number of trucks entering the weight station. In its basic operation, the algorithm allows more trucks to be inspected when the weight station is underutilized by lowering the weight threshold. Conversely, when the station is overutilized, the algorithm restricts the number of trucks inspected to prevent queue spillover. The proposed control concept is demonstrated and evaluated in a simulation environment. The simulation results demonstrate the considerable benefits of the proposed algorithm in improving overweight enforcement with minimal negative impact on non-overweight trucks. The test results also reveal that the effectiveness of the algorithm improves with higher truck participation rates in the e-screening program.
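The feedback idea above can be sketched as a proportional controller on the weight threshold. The gain, target occupancy, and threshold bounds below are illustrative assumptions, not values from the paper.

```python
def adjust_weight_threshold(threshold, occupancy, target=0.8,
                            gain=500.0, lo=8000.0, hi=15000.0):
    """Sketch of the metering feedback: when station occupancy is below
    the target, lower the weight threshold so more trucks are pulled in
    for inspection; when occupancy exceeds the target, raise it to avoid
    queue spillover. Units (kg) and all parameter values are assumptions."""
    error = occupancy - target          # positive when overutilized
    new_thr = threshold + gain * error  # proportional feedback step
    return min(hi, max(lo, new_thr))    # keep the threshold within bounds
```

For example, at 90% occupancy the threshold rises (fewer inspections), and at 50% occupancy it drops (more inspections), which is the qualitative behaviour the abstract describes.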
Abstract:
The author presents adaptive control techniques for controlling the flow of real-time jobs from the peripheral processors (PPs) to the central processor (CP) of a distributed system with a star topology. He considers two classes of flow control mechanisms: (1) proportional control, where a certain proportion of the load offered to each PP is sent to the CP, and (2) threshold control, where there is a maximum rate at which each PP can send jobs to the CP. The problem is to obtain good algorithms for dynamically adjusting the control level at each PP in order to prevent overload of the CP, when the load offered by the PPs is unknown and varying. The author formulates the problem approximately as a standard system control problem in which the system has unknown parameters that are subject to change. Using well-known techniques (e.g., naive-feedback-controller and stochastic approximation techniques), he derives adaptive controls for the system control problem. He demonstrates the efficacy of these controls in the original problem by using the control algorithms in simulations of a queuing model of the CP and the load controls.
Abstract:
A comprehensive model of laser propagation in the atmosphere with a complete adaptive optics (AO) system for phase compensation is presented, and a corresponding computer program is compiled. A direct wave-front gradient control method is used to reconstruct the wave-front phase. With the long-exposure Strehl ratio as the evaluation parameter, a numerical simulation of an AO system in a stationary state with the atmospheric propagation of a laser beam was conducted. It was found that for certain conditions the phase screen that describes turbulence in the atmosphere might not be isotropic. Numerical experiments show that the computational results in imaging of lenses by means of the fast Fourier transform (FFT) method agree well with those computed by means of an integration method. However, the computer time required for the FFT method is 1 order of magnitude less than that of the integration method. Phase tailoring of the calculated phase is presented as a means to solve the problem that variance of the calculated residual phase does not correspond to the correction effectiveness of an AO system. It is found for the first time to our knowledge that for a constant delay time of an AO system, when the lateral wind speed exceeds a threshold, the compensation effectiveness of an AO system is better than that of complete phase conjugation. This finding indicates that the better compensation capability of an AO system does not mean better correction effectiveness. (C) 2000 Optical Society of America.
Abstract:
This paper presents a numerical method for the simulation of flow in turbomachinery blade rows using a solution-adaptive mesh methodology. The fully three-dimensional, compressible, Reynolds-averaged Navier-Stokes equations with k-ε turbulence modeling (and low Reynolds number damping terms) are solved on an unstructured mesh formed from tetrahedral finite volumes. At stages in the solution, mesh refinement is carried out based on flagging cell faces with either a fractional variation of a chosen variable (like Mach number) greater than a given threshold or with a mean value of the chosen variable within a given range. Several solutions are presented, including that for the highly three-dimensional flow associated with the corner stall and secondary flow in a transonic compressor cascade, to demonstrate the potential of the new method.
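The face-flagging criterion described above can be sketched directly. This is a schematic illustration, not the paper's code: the function name, the tolerance guard, and the way cell-pair values are passed in are all assumptions.

```python
def flag_faces_for_refinement(face_values, frac_threshold=0.1, value_range=None):
    """Sketch of the refinement flagging rule: a cell face is flagged if
    the fractional variation of a chosen variable (e.g. Mach number)
    across it exceeds a threshold, or if the mean value on the face lies
    within a given range. Parameter names/values are illustrative."""
    flags = []
    for a, b in face_values:  # variable values in the two cells sharing a face
        mean = 0.5 * (a + b)
        frac_var = abs(a - b) / max(abs(mean), 1e-12)
        in_range = value_range is not None and value_range[0] <= mean <= value_range[1]
        flags.append(frac_var > frac_threshold or in_range)
    return flags
```

Faces flagged this way would then drive local subdivision of the tetrahedral finite volumes at the chosen stages of the solution.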
Abstract:
We propose a low-complexity closed-loop spatial multiplexing method with limited feedback over multi-input-multi-output (MIMO) fading channels. The transmit adaptation is simply performed by selecting transmit antennas (or substreams) by comparing their signal-to-noise ratios to a given threshold with a fixed nonadaptive constellation and fixed transmit power per substream. We analyze the performance of the proposed system by deriving closed-form expressions for spectral efficiency, average transmit power, and bit error rate (BER). Depending on practical system design constraints, the threshold is chosen to maximize the spectral efficiency (or minimize the average BER) subject to average transmit power and average BER (or spectral efficiency) constraints, respectively. We present numerical and Monte Carlo simulation results that validate our analysis. Compared to open-loop spatial multiplexing and other approaches that select the best antenna subset in spatial multiplexing, the numerical results illustrate that the proposed technique obtains significant power gains for the same BER and spectral efficiency. We also provide numerical results that show improvement over rate-adaptive orthogonal space-time block coding, which requires highly complex constellation adaptation. We analyze the impact of feedback delay using analytical and Monte Carlo approaches. The proposed approach is arguably the simplest possible adaptive spatial multiplexing system from an implementation point of view. However, our approach and analysis can be extended to other systems using multiple constellations and power levels.
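The substream selection rule at the heart of the scheme above is a simple per-antenna SNR comparison. The sketch below is an assumption-laden illustration (function name and units are mine); the constellation and per-substream power remain fixed, as the abstract states.

```python
def select_substreams(snrs_db, threshold_db):
    """Sketch of limited-feedback antenna/substream selection: each
    substream whose SNR meets the threshold is activated; the resulting
    index set (one bit per antenna) is fed back to the transmitter.
    The dB representation and threshold value are illustrative."""
    return [i for i, snr in enumerate(snrs_db) if snr >= threshold_db]
```

In the paper's framing, the threshold itself would be tuned offline to maximize spectral efficiency (or minimize average BER) under the power and BER constraints; here it is just an input.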
Abstract:
A key feature of a context-aware application is the ability to adapt to changes of context. Two approaches widely used in this regard are context-action pair mapping, where developers match an action to execute for a particular context change, and adaptive learning, where a context-aware application refines its action over time based on the preceding action's outcome. Both approaches have limitations that make them unsuitable in situations where a context-aware application has to deal with unknown context changes. In this paper we propose a framework where adaptation is carried out via concurrent multi-action evaluation of a dynamically created action space. The dynamic creation of the action space eliminates the need to rely on developers to create context-action pairs, and the concurrent multi-action evaluation reduces adaptation time compared with the iterative approach used by adaptive learning techniques. Using our reference implementation of the framework, we show how it can be used to dynamically determine the threshold price in an e-commerce system that uses the name-your-own-price (NYOP) strategy.
Abstract:
In Wireless Sensor Networks (WSN), neglecting the effects of varying channel quality can lead to unnecessary wastage of precious battery resources, which in turn can result in the rapid depletion of sensor energy and the partitioning of the network. Fairness is a critical issue when accessing a shared wireless channel, and fair scheduling must be employed to provide the proper flow of information in a WSN. In this paper, we develop a channel-adaptive MAC protocol with a traffic-aware dynamic power management algorithm for efficient packet scheduling and queuing in a sensor network, with the time-varying characteristics of the wireless channel also taken into consideration. The proposed protocol calculates a combined weight value based on the channel state and link quality. Transmission is then allowed only for those nodes with weights greater than a minimum quality threshold, and nodes attempting to access the wireless medium with a low weight are allowed to transmit only when their weight becomes high. This can deprive many poor-quality nodes of transmission for a considerable amount of time. To avoid buffer overflow and to achieve fairness for the poor-quality nodes, we design a load prediction algorithm. We also design a traffic-aware dynamic power management scheme to minimize energy consumption by continuously turning off the radio interfaces of all unnecessary nodes that are not included in the routing path. Simulation results show that the proposed protocol achieves higher throughput and fairness while reducing delay.
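The weight-gated scheduling step can be sketched as below. The equal-weight combination of channel state and link quality is purely an assumption for illustration; the paper's actual weight formula is not given in the abstract.

```python
def schedule(nodes, w_min=0.5):
    """Sketch of weight-threshold scheduling: each node's combined weight
    is derived from its channel state and link quality (here an assumed
    50/50 combination, both normalized to [0, 1]); only nodes whose
    weight meets the minimum quality threshold w_min may transmit."""
    eligible = []
    for name, channel_state, link_quality in nodes:
        weight = 0.5 * channel_state + 0.5 * link_quality  # assumed combination
        if weight >= w_min:
            eligible.append(name)
    return eligible
```

Nodes left out of `eligible` are the "poor quality" nodes the abstract refers to, for which the load prediction algorithm compensates to avoid buffer overflow.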
Abstract:
The standard separable two-dimensional wavelet transform has achieved great success in image denoising applications due to its sparse representation of images. However, it fails to capture efficiently the anisotropic geometric structures in images, such as edges and contours, as they intersect too many wavelet basis functions and lead to a non-sparse representation. In this paper, a novel denoising scheme based on a multi-directional, anisotropic wavelet transform called the directionlet transform is presented. Image denoising in the wavelet domain is extended to the directionlet domain so that image features concentrate on fewer coefficients, making more effective thresholding possible. The image is first segmented, and the dominant direction of each segment is identified to build a directional map. According to the directional map, the directionlet transform is then taken along the dominant direction of the selected segment. The decomposed images with directional energy are used for scale-dependent, subband-adaptive optimal threshold computation based on the SURE risk. This threshold is applied to all sub-bands except the LLL subband. The threshold-corrected sub-bands, together with the unprocessed LLL subband, are given as input to the inverse directionlet algorithm to obtain the denoised image. Experimental results show that the proposed method outperforms standard wavelet-based denoising methods in terms of numerical and visual quality.
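The subband-adaptive thresholding step can be sketched with the standard SURE-minimizing soft threshold applied to one subband's coefficients. This is a generic SureShrink-style sketch, not the paper's directionlet pipeline: the noise estimate via the median absolute deviation and the candidate-threshold search are common conventions assumed here.

```python
import numpy as np

def sure_soft_threshold(coeffs):
    """Sketch: soft-threshold one subband, picking the threshold that
    minimizes Stein's unbiased risk estimate (SURE) over the observed
    coefficient magnitudes. Noise sigma is estimated with the usual
    median(|x|)/0.6745 rule (an assumed convention)."""
    x = np.asarray(coeffs, dtype=float).ravel()
    n = x.size
    sigma = max(np.median(np.abs(x)) / 0.6745, 1e-12)
    y = np.abs(x) / sigma                 # normalized magnitudes
    cand = np.sort(y)                     # candidate thresholds
    risks = []
    for t in cand:
        clipped = np.minimum(y, t) ** 2
        # SURE risk of soft thresholding at level t
        risks.append(n - 2 * np.sum(y <= t) + np.sum(clipped))
    t_best = cand[int(np.argmin(risks))] * sigma
    # soft thresholding: shrink magnitudes toward zero by t_best
    return np.sign(x) * np.maximum(np.abs(x) - t_best, 0.0)
```

In the paper's scheme this kind of threshold would be computed per scale and subband of the directionlet decomposition, and applied to every subband except the LLL band before inverting the transform.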