981 results for optimal hedge ratio
Abstract:
Classifier selection is a problem encountered by multi-biometric systems that aim to improve performance through fusion of decisions. A particular decision fusion architecture that combines multiple instances (n classifiers) and multiple samples (m attempts at each classifier) has been proposed in previous work to achieve a controlled trade-off between false alarms and false rejects. Although analysis on text-dependent speaker verification has demonstrated better performance for fusion of decisions with favourable dependence compared to statistically independent decisions, the performance is not always optimal. Given a pool of instances, the best performance with this architecture is obtained for certain combinations of instances. Heuristic rules and diversity measures have been commonly used for classifier selection, but it is shown that optimal performance is achieved for the `best combination performance' rule. As the search complexity for this rule increases exponentially with the addition of classifiers, a measure - the sequential error ratio (SER) - is proposed in this work that is specifically adapted to the characteristics of the sequential fusion architecture. The proposed measure can be used to select the classifier that is most likely to produce a correct decision at each stage. Error rates for fusion of text-dependent HMM-based speaker models using SER are compared with those of other classifier selection methodologies. SER is shown to achieve near-optimal performance for sequential fusion of multiple instances, with or without the use of multiple samples. The methodology applies to multiple speech utterances for telephone- or internet-based access control, and to other systems such as multiple-fingerprint and multiple-handwriting-sample based identity verification systems.
Abstract:
Integer ambiguity resolution is an indispensable procedure for all high-precision GNSS applications. The correctness of the estimated integer ambiguities is the key to achieving highly reliable positioning, but the solution cannot be validated with classical hypothesis testing methods. The integer aperture estimation theory unifies all existing ambiguity validation tests and provides a new perspective from which to review existing methods, enabling a better understanding of the ambiguity validation problem. This contribution analyses two simple but efficient ambiguity validation tests, the ratio test and the difference test, from three aspects: acceptance region, probability basis and numerical results. The main contributions of this paper can be summarized as follows: (1) the ratio test acceptance region is an overlap of ellipsoids, while the difference test acceptance region is an overlap of half-spaces; (2) the probability basis of these two popular tests is analyzed for the first time: the difference test is an approximation to the optimal integer aperture estimator, while the ratio test follows an exponential relationship in probability; (3) the limitations of the two tests are identified for the first time: both may under-evaluate the failure risk if the model is not strong enough or the float ambiguities fall in particular regions; (4) extensive numerical results are used to compare the performance of the two tests. The simulation results show that the ratio test outperforms the difference test in some models, while the difference test performs better in others. In particular, in the medium-baseline kinematic model the difference test outperforms the ratio test; this superiority is independent of the number of frequencies, the observation noise and the satellite geometry, but depends on the success rate and the failure rate tolerance. A smaller failure rate leads to a larger performance discrepancy.
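For reference, the two tests are usually written in the following standard form (notation assumed here, not spelled out in the abstract): with a-hat the float ambiguity vector, a-check_1 and a-check_2 the best and second-best integer candidates, and Q_a-hat the variance-covariance matrix of the float solution, the fixed solution a-check_1 is accepted if

    \frac{\lVert \hat{a} - \check{a}_2 \rVert^{2}_{Q_{\hat{a}}}}{\lVert \hat{a} - \check{a}_1 \rVert^{2}_{Q_{\hat{a}}}} \ge c \quad \text{(ratio test)}, \qquad
    \lVert \hat{a} - \check{a}_2 \rVert^{2}_{Q_{\hat{a}}} - \lVert \hat{a} - \check{a}_1 \rVert^{2}_{Q_{\hat{a}}} \ge d \quad \text{(difference test)},

for critical values c and d. The ellipsoidal versus half-space shapes of the acceptance regions noted in point (1) follow directly from these two forms.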
Abstract:
The problem of constructing space-time (ST) block codes over a fixed, desired signal constellation is considered. In this situation, there is a tradeoff between the transmission rate as measured in constellation symbols per channel use and the transmit diversity gain achieved by the code. The transmit diversity is a measure of the rate of polynomial decay of the pairwise error probability of the code with increasing signal-to-noise ratio (SNR). In the setting of a quasi-static channel model, let n_t denote the number of transmit antennas and T the block interval. For any n_t <= T, a unified construction of (n_t x T) ST codes is provided here, for a class of signal constellations that includes the familiar pulse-amplitude (PAM), quadrature-amplitude (QAM), and 2^K-ary phase-shift-keying (PSK) modulations as special cases. The construction is optimal as measured by the rate-diversity tradeoff and can achieve any given integer point on the rate-diversity tradeoff curve. An estimate of the coding gain realized is given. Other results presented here include i) an extension of the optimal unified construction to the multiple-fading-block case, ii) a version of the optimal unified construction in which the underlying binary block codes are replaced by trellis codes, iii) a linear dispersion form for the underlying binary block codes, iv) a Gray-mapped version of the unified construction, and v) a generalization of the construction to the S-ary case, corresponding to constellations of size S^K. Items ii) and iii) are aimed at simplifying the decoding of this class of ST codes.
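As a point of reference, the rate-diversity tradeoff for codes over a fixed constellation is commonly stated in the following form (this is the standard statement in that setting, not a detail given in the abstract): with n_t transmit antennas, the rate R in constellation symbols per channel use and the transmit diversity gain d satisfy

    R \le n_t - d + 1,

so a tradeoff-optimal construction such as the one above can realize any integer pair (R, d) on this line.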
Abstract:
In a max-min LP, the objective is to maximise ω subject to Ax ≤ 1, Cx ≥ ω1, and x ≥ 0 for nonnegative matrices A and C. We present a local algorithm (constant-time distributed algorithm) for approximating max-min LPs. The approximation ratio of our algorithm is the best possible for any local algorithm; there is a matching unconditional lower bound.
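Written out explicitly, the max-min LP described above is

    \max_{x,\,\omega}\ \omega \quad \text{subject to} \quad A x \le \mathbf{1}, \quad C x \ge \omega\,\mathbf{1}, \quad x \ge 0,

where A and C are entrywise nonnegative and \mathbf{1} denotes the all-ones vector.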
Abstract:
We develop an optimal, distributed, low-feedback, timer-based selection scheme to enable next-generation rate-adaptive wireless systems to exploit multi-user diversity. In our scheme, each user sets a timer depending on its signal-to-noise ratio (SNR) and transmits a small packet to identify itself when its timer expires. When the SNR-to-timer mapping is monotone non-increasing, timers of users with better SNRs expire earlier. Thus, the base station (BS) simply selects the first user whose timer expiry it can detect, and transmits data to it at as high a rate as is reliably possible. However, timers that expire too close to one another cannot be detected by the BS due to collisions. We characterize in detail the structure of the SNR-to-timer mapping that optimally handles these collisions to maximize the average data rate. We prove that the optimal timer values take only a discrete set of values, and that the rate adaptation policy strongly influences the optimal scheme's structure. The optimal average rate is very close to that of ideal selection, in which the BS always selects the highest-rate user, and is much higher than that of the popular, but ad hoc, timer schemes considered in the literature.
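The selection mechanism can be illustrated with a small simulation sketch. The mapping below (SNR quantized into levels, the best level expiring first, and a collision whenever two timers in the same level expire together) is an illustrative simplification under assumed parameters, not the optimal mapping derived in the paper.

    import numpy as np

    def toy_timer_selection(snrs, level_edges):
        """Toy sketch of timer-based user selection with a discrete SNR-to-timer map.

        snrs        : array of instantaneous user SNRs (linear scale)
        level_edges : increasing SNR thresholds defining the timer levels (assumed design)

        Better SNR -> higher level -> earlier timer expiry. If two or more users occupy
        the earliest occupied level, their packets collide and the BS moves on to the
        next detectable expiry. Returns the index of the selected user, or None.
        """
        snrs = np.asarray(snrs, dtype=float)
        levels = np.digitize(snrs, level_edges)       # 0 = worst, len(level_edges) = best
        for lvl in range(len(level_edges), -1, -1):   # scan from earliest expiry to latest
            users = np.flatnonzero(levels == lvl)
            if users.size == 1:                       # a single, detectable expiry
                return int(users[0])
            # users.size > 1: simultaneous expiries -> collision, keep scanning
        return None                                   # every occupied level collided

    # Example: 6 users in Rayleigh fading with mean SNR of 10 (linear scale)
    rng = np.random.default_rng(0)
    snrs = rng.exponential(10.0, size=6)
    print(toy_timer_selection(snrs, level_edges=[2.0, 5.0, 10.0, 20.0]), snrs)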
Abstract:
Feature extraction in bilingual OCR is handicapped by the increase in the number of classes or characters to be handled. This is evident in the case of Indian languages, whose alphabet set is large. It is expected that the complexity of the feature extraction process increases with the number of classes. Though the best set of features to use cannot be ascertained through any quantitative measure, the characteristics of the scripts can help decide on the feature extraction procedure. This paper describes a hierarchical feature extraction scheme for recognition of printed bilingual (Tamil and Roman) text. The scheme divides the combined alphabet set of both scripts into subsets by the extraction of certain spatial and structural features. Three features, viz. geometric moments, DCT-based features and wavelet-transform-based features, are extracted from the grouped symbols, and a linear transformation is performed on them for efficient representation in the feature space. The transformation is obtained by the maximization of certain criterion functions. Three techniques, namely principal component analysis, maximization of Fisher's ratio and maximization of a divergence measure, have been employed to estimate the transformation matrix. It has been observed that the proposed hierarchical scheme allows for easier handling of the alphabets, and there is an appreciable rise in the recognition accuracy as a result of the transformations.
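For the second of the three criteria, the transformation matrix W is typically chosen to maximize Fisher's ratio of between-class to within-class scatter; the form below uses standard discriminant-analysis notation and is an assumption, not a quotation from the paper:

    J(W) = \frac{\det\left(W^{\top} S_B\, W\right)}{\det\left(W^{\top} S_W\, W\right)},

where S_B and S_W are the between-class and within-class scatter matrices of the extracted features.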
Abstract:
We address the problem of local-polynomial modeling of smooth time-varying signals with unknown functional form, in the presence of additive noise. The problem formulation is in the time domain and the polynomial coefficients are estimated in the pointwise minimum mean square error (PMMSE) sense. The choice of the window length for local modeling introduces a bias-variance tradeoff, which we solve optimally by using the intersection-of-confidence-intervals (ICI) technique. The combination of the local polynomial model and the ICI technique gives rise to an adaptive signal model equipped with a time-varying PMMSE-optimal window length whose performance is superior to that obtained by using a fixed window length. We also evaluate the sensitivity of the ICI technique with respect to the confidence interval width. Simulation results on electrocardiogram (ECG) signals show that at 0 dB signal-to-noise ratio (SNR), one can achieve about 12 dB improvement in SNR. Monte Carlo performance analysis shows that the performance is comparable to that of basic wavelet techniques. For 0 dB SNR, the adaptive window technique yields about 2-3 dB higher SNR than wavelet regression techniques, and for SNRs greater than 12 dB, the wavelet techniques yield about 2 dB higher SNR.
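A minimal sketch of the ICI window-selection rule is given below. It assumes white noise with a known standard deviation and uses an ordinary least-squares local polynomial fit; the PMMSE weighting and the specific confidence-interval constant used in the paper are not reproduced here.

    import numpy as np

    def local_poly_estimate(t, y, t0, h, deg=2, sigma=1.0):
        """LS local polynomial estimate of the signal value at t0 from samples with |t - t0| <= h.
        Returns (estimate, standard deviation of the estimate), assuming white noise of std sigma."""
        mask = np.abs(t - t0) <= h
        T = np.vander(t[mask] - t0, deg + 1)          # columns (t-t0)^deg, ..., (t-t0), 1
        coef, *_ = np.linalg.lstsq(T, y[mask], rcond=None)
        pick = np.zeros(deg + 1); pick[-1] = 1.0      # constant term = fitted value at t0
        g = pick @ np.linalg.pinv(T)                  # estimate = g @ y[mask]
        return coef[-1], sigma * np.linalg.norm(g)

    def ici_window(t, y, t0, windows, kappa=2.0, deg=2, sigma=1.0):
        """Intersection-of-confidence-intervals rule: enlarge the window as long as the running
        intersection of [est - kappa*std, est + kappa*std] stays non-empty; return the last
        admissible (window, estimate) pair. A sketch under the stated noise assumptions."""
        lo, hi = -np.inf, np.inf
        best = None
        for h in sorted(windows):
            est, std = local_poly_estimate(t, y, t0, h, deg, sigma)
            lo, hi = max(lo, est - kappa * std), min(hi, est + kappa * std)
            if lo > hi:                               # intervals no longer intersect: stop
                break
            best = (h, est)
        return best

    # Example: noisy slow sinusoid, estimate at t0 = 0.5
    t = np.linspace(0, 1, 400)
    y = np.sin(2 * np.pi * t) + 0.3 * np.random.default_rng(1).standard_normal(t.size)
    print(ici_window(t, y, 0.5, windows=[0.02, 0.05, 0.1, 0.2, 0.4], sigma=0.3))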
Abstract:
Optimal switching angles for minimization of the total harmonic distortion of the line current (I-THD) in a voltage source inverter are traditionally determined by imposing half-wave symmetry (HWS) and quarter-wave symmetry (QWS) conditions on the pulse-width modulated waveform. This paper investigates optimal switching angles with QWS relaxed. Relaxing QWS expands the solution space and presents the possibility of improved solutions. The optimal solutions without QWS are shown here to outperform the optimal solutions with QWS over a range of modulation index (M) between 0.82 and 0.94, for a switching-frequency to fundamental-frequency ratio of 5. Theoretical and experimental results are presented on a 2.3 kW induction motor drive.
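For clarity, the quantity being minimized is the usual line-current THD,

    \mathrm{THD}_I = \frac{\sqrt{\sum_{n \ge 2} I_n^{2}}}{I_1},

where I_n is the rms value of the n-th current harmonic; the switching angles enter through the harmonic amplitudes of the PWM voltage waveform.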
Abstract:
The main idea of the Load-Unload Response Ratio (LURR) is that when a system is stable, its response to loading corresponds to its response to unloading, whereas when the system is approaching an unstable state, the response to loading and unloading becomes quite different. High LURR values and observations of Accelerating Moment/Energy Release (AMR/AER) prior to large earthquakes have led different research groups to suggest intermediate-term earthquake prediction is possible and imply that the LURR and AMR/AER observations may have a similar physical origin. To study this possibility, we conducted a retrospective examination of several Australian and Chinese earthquakes with magnitudes ranging from 5.0 to 7.9, including Australia's deadly Newcastle earthquake and the devastating Tangshan earthquake. Both LURR values and best-fit power-law time-to-failure functions were computed using data within a range of distances from the epicenter. Like the best-fit power-law fits in AMR/AER, the LURR value was optimal using data within a certain epicentral distance implying a critical region for LURR. Furthermore, LURR critical region size scales with mainshock magnitude and is similar to the AMR/AER critical region size. These results suggest a common physical origin for both the AMR/AER and LURR observations. Further research may provide clues that yield an understanding of this mechanism and help lead to a solid foundation for intermediate-term earthquake prediction.
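In the LURR literature the ratio is usually defined as follows (standard form, assumed here rather than restated from this abstract):

    Y = \frac{X_{+}}{X_{-}}, \qquad X_{\pm} = \frac{\Delta R_{\pm}}{\Delta P_{\pm}},

where \Delta R is the change in a response quantity (e.g. seismic energy or Benioff strain release) and \Delta P the change in load during loading (+) and unloading (-) intervals; Y stays near 1 for a stable region and rises as failure approaches.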
Abstract:
Gene-culture co-evolution emphasizes the joint role of culture and genes in the emergence of altruistic and cooperative behaviors, and behavioral genetics provides estimates of their relative importance. However, these approaches cannot assess which biological traits determine altruism, or how. We analyze the association between altruism in adults and exposure to prenatal sex hormones, using the second-to-fourth digit ratio. We find an inverted U-shaped relation for both left and right hands, which is very consistent for men and less systematic for women. Subjects with both high and low digit ratios give less than individuals with intermediate digit ratios. We repeat the exercise with the same subjects seven months later and find a similar association, even though subjects' behavior differs the second time they play the game. We then construct proxies of the median digit ratio in the population (using more than 1000 different subjects) and show that subjects' altruism decreases with the distance of their ratio from these proxies. These results provide direct evidence that prenatal events contribute to the variation of altruistic behavior and that exposure to fetal hormones is one of the relevant biological factors. In addition, the findings suggest that there might be an optimal level of exposure to these hormones from a social perspective.
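The inverted-U shape is the kind of pattern captured by a quadratic specification such as the one below; this is an illustrative form, not the exact regression reported in the paper:

    \text{giving}_i = \beta_0 + \beta_1 \, DR_i + \beta_2 \, DR_i^{2} + \varepsilon_i, \qquad \beta_2 < 0,

where DR_i is subject i's second-to-fourth digit ratio and a negative \beta_2 produces lower giving at both extremes of the ratio.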
Abstract:
This study addresses the problem of obtaining reliable velocities and displacements from accelerograms, a concern which often arises in earthquake engineering. A closed-form acceleration expression with random parameters is developed to test any strong-motion accelerogram processing method. Integration of this analytical time history yields the exact velocities, displacements and Fourier spectra. Noise and truncation can also be added. A two-step testing procedure is proposed and the original Volume II routine is used as an illustration. The main sources of error are identified and discussed. Although these errors may be reduced, it is impossible to extract the true time histories from an analog or digital accelerogram because of the uncertain noise level and missing data. Based on these uncertainties, a probabilistic approach is proposed as a new accelerogram processing method. A most probable record is presented as well as a reliability interval which reflects the level of error-uncertainty introduced by the recording and digitization process. The data is processed in the frequency domain, under assumptions governing either the initial value or the temporal mean of the time histories. This new processing approach is tested on synthetic records. It induces little error and the digitization noise is adequately bounded. Filtering is intended to be kept to a minimum and two optimal error-reduction methods are proposed. The "noise filters" reduce the noise level at each harmonic of the spectrum as a function of the signal-to-noise ratio. However, the correction at low frequencies is not sufficient to significantly reduce the drifts in the integrated time histories. The "spectral substitution method" uses optimization techniques to fit spectral models of near-field, far-field or structural motions to the amplitude spectrum of the measured data. The extremes of the spectrum of the recorded data where noise and error prevail are then partly altered, but not removed, and statistical criteria provide the choice of the appropriate cutoff frequencies. This correction method has been applied to existing strong-motion far-field, near-field and structural data with promising results. Since this correction method maintains the whole frequency range of the record, it should prove to be very useful in studying the long-period dynamics of local geology and structures.
Abstract:
1. In previous work, phytoplankton regulation in freshwater lakes has been associated with many factors. Among these, the ratio of total nitrogen to total phosphorus (TN : TP) has been widely proposed as an index to identify whether phytoplankton are N- or P-limited. From another point of view, it has been suggested that planktivorous fish can be used to control phytoplankton. 2. Large-scale investigations of phytoplankton biomass [measured as chlorophyll a (chl-a)] were carried out in 45 mid-lower Yangtze shallow lakes to test hypotheses concerning nutrient limitation (assessed with TN : TP ratios) and phytoplankton control by planktivorous fish. 3. Regression analyses indicated that TP was the primary regulating factor and TN the second regulating factor for both annual and summer phytoplankton chl-a. In separate nutrient-chl-a regression analyses for lakes of different TN : TP ratios, TP was also superior to TN in predicting chl-a at all particular TN : TP ranges and over the entire TN : TP spectrum. Further analyses found that chl-a : TP was not influenced by TN : TP, while chl-a : TN was positively and highly correlated to TP : TN. 4. Based on these results, and others in the literature, we argue that the TN : TP ratio is inappropriate as an index to identify limiting nutrients. It is almost impossible to specify a 'cut-off' TN : TP ratio to identify a limiting nutrient for a multi-species community because optimal N : P ratios vary greatly among phytoplankton species. 5. Lakes with yields of planktivorous fish (silver and bighead carp, the species native to China) > 100 kg ha^-1 had significantly higher chl-a and lower Secchi depth than those with yields < 100 kg ha^-1. TP-chl-a and TP-Secchi depth relationships are not significantly different between lakes with yields > 100 kg ha^-1 and those with yields < 100 kg ha^-1. These results indicate that the fish failed to decrease chl-a yield or enhance Secchi depth (Z_SD). Therefore, silver carp and bighead carp are not recommended as a biotic agent for phytoplankton control in lake management if the goal is to control the entire phytoplankton community and to enhance water quality.
Abstract:
We study the nature of biomolecular binding. We find that, in general, there exist several thermodynamic phases: a native binding phase, a non-native phase, and a glass or local-trapping phase. The quantitative optimal criterion for binding specificity is found to be the maximization of the ratio of the binding transition temperature to the trapping transition temperature or, equivalently, the ratio of the energy gap between the native state and the average of the non-native states to the dispersion (variance) of the non-native states. This leads to a funneled binding energy landscape.
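In symbols, the criterion stated above amounts to maximizing (the notation is ours, chosen to match the wording of the abstract)

    \Lambda = \frac{T_{\mathrm{binding}}}{T_{\mathrm{trapping}}} \;\sim\; \frac{\delta E}{\Delta E},

where \delta E is the energy gap between the native binding state and the average of the non-native states and \Delta E is the dispersion of the non-native state energies.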
Abstract:
Many problems in early vision are ill-posed. Edge detection is a typical example. This paper applies regularization techniques to the problem of edge detection. We derive an optimal filter for edge detection with a size controlled by the regularization parameter $\lambda$ and compare it to the Gaussian filter. A formula relating the signal-to-noise ratio to the parameter $\lambda$ is derived from regularization analysis for the case of small values of $\lambda$. We also discuss the method of Generalized Cross Validation for obtaining the optimal filter scale. Finally, we use our framework to explain two perceptual phenomena: coarsely quantized images becoming recognizable by either blurring or adding noise.
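For concreteness, a standard-form (Tikhonov) regularization functional of the kind underlying such an analysis, written for a one-dimensional intensity profile $y_i$ sampled at points $x_i$, is sketched below; the precise stabilizer used in the paper is not restated in the abstract, so this is an assumption:

    E[f] = \sum_i \bigl( f(x_i) - y_i \bigr)^{2} + \lambda \int \bigl( f''(x) \bigr)^{2} \, dx,

where the first term enforces fidelity to the noisy data, the second penalizes non-smoothness, and the regularization parameter $\lambda$ controls the balance between the two, and hence the effective size of the resulting edge-detection filter.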
Abstract:
This paper presents preliminary studies in electroplating using megasonic agitation to avoid the formation of voids within high-aspect-ratio microvias that are used for the redistribution of interconnects in high-density interconnection technology in printed circuit boards. Through this technique, uniform deposition of metal on the side walls of the vias is possible. High-frequency acoustic streaming at megasonic frequencies reduces the Nernst diffusion layer thickness to the sub-micron range, thereby allowing conformal electrodeposition in deep grooves. This effect enables the normally convection-free liquid near the surface to be agitated. Higher throughput and better control of the material properties of the deposits can be achieved for the manufacturing of embedded interconnections and metal-based MEMS. For optimal filling performance of the microvias, a full design of experiments (DOE) and a multi-physics numerical simulation have been conducted to analyse the influence of megasonic agitation on the plating quality of the microvias. Megasonic-based deposition has been found to increase the deposition rate as well as to improve the quality of the metal deposits.