954 results for Weighted distributions
Abstract:
The paper deals with a linearization technique for non-linear oscillations in systems governed by second-order non-linear ordinary differential equations. The method approximates the non-linear function by a linear function such that the error is least in the weighted mean-square sense. It has been applied to cubic, sine, hyperbolic-sine, and odd-polynomial non-linearities, and the results obtained are more accurate than those given by existing linearization methods.
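The optimal coefficient of such a weighted least-squares linearization has a closed form: for f(x) ≈ kx on [-A, A], k = ∫w(x)f(x)x dx / ∫w(x)x² dx. A minimal numerical sketch of this idea (our illustration, not the paper's code):

```python
import numpy as np

def linearize(f, A, w=None, n=10001):
    """Least-squares linear coefficient k for f(x) ~ k*x on [-A, A],
    minimizing the weighted mean-square error with weight w(x):
        k = integral(w*f*x) / integral(w*x**2)
    """
    x = np.linspace(-A, A, n)
    wx = np.ones_like(x) if w is None else w(x)

    def trapz(y):
        # composite trapezoidal rule on the grid x
        return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

    return trapz(wx * f(x) * x) / trapz(wx * x**2)

# Cubic non-linearity with uniform weight: exact value is k = 3*A**2/5
k = linearize(lambda x: x**3, A=2.0)
print(round(k, 4))  # 2.4
```

For the cubic case with a uniform weight the integrals are elementary, which makes a convenient sanity check for the numerical routine.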
Abstract:
In this paper, we examine the predictability of observed volatility smiles in three major European index options markets, utilising the historical return distributions of the respective underlying assets. The analysis applies the Black (1976) pricing model adjusted in accordance with the methodology proposed by Jarrow and Rudd (1982). We adjust the expected future returns for the third and fourth central moments, as these represent deviations from normality in the distributions of observed returns and are therefore considered one possible explanation for the existence of the smile. The results indicate that including the higher moments in the pricing model reduces the volatility smile to some extent compared with the unadjusted Black-76 model. However, as the smile is partly a function of supply, demand, and liquidity, and as such difficult to model, this modification does not appear sufficient to fully capture the characteristics of the smile.
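For reference, the unadjusted Black-76 baseline against which the moment-adjusted prices are compared is C = e^(-rT)[F·N(d1) - K·N(d2)]; the Jarrow-Rudd adjustment then adds correction terms driven by the third and fourth central moments. A minimal sketch of the baseline only (illustrative, not the authors' implementation):

```python
import math
from statistics import NormalDist

def black76_call(F, K, sigma, T, r):
    """Black (1976) price of a European call on a forward/futures price F.
    C = exp(-r*T) * (F*N(d1) - K*N(d2)),
    d1 = (ln(F/K) + sigma^2*T/2) / (sigma*sqrt(T)), d2 = d1 - sigma*sqrt(T).
    """
    d1 = (math.log(F / K) + 0.5 * sigma**2 * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = NormalDist().cdf
    return math.exp(-r * T) * (F * N(d1) - K * N(d2))

# At-the-money, sigma = 20%, one year, zero rate: roughly 0.3989*sigma*sqrt(T)*F
print(round(black76_call(100.0, 100.0, 0.2, 1.0, 0.0), 2))  # 7.97
```

Backing out implied volatilities from this baseline across strikes is what produces the smile the paper attempts to flatten.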
Abstract:
Reconstructions in optical tomography involve obtaining images of the absorption and reduced scattering coefficients. Integrated intensity data are more sensitive to variations in the absorption coefficient than in the scattering coefficient; however, the sensitivity of intensity data to the scattering coefficient is not zero. We considered an object with two inhomogeneities (one in the absorption and the other in the scattering coefficient). Standard iterative reconstruction techniques produced results plagued by cross-talk: the absorption-coefficient reconstruction has a false positive at the location of the scattering inhomogeneity, and vice versa. We present a method to remove this cross-talk by generating a weight matrix and weighting the update vector during each iteration. The weight matrix is created as follows: we first perform a simple backprojection of the difference between the experimental and the corresponding homogeneous intensity data. The resulting image is weighted more towards the absorption inhomogeneity than the scattering inhomogeneity, and its appropriate inverse is weighted towards the scattering inhomogeneity. These two weight matrices are used as multiplicative factors on the update vectors (the normalized backprojected difference-intensity image for the absorption inhomogeneity and its inverse for the scattering inhomogeneity) during the image reconstruction procedure. We demonstrate through numerical simulations that the cross-talk is fully eliminated by this modified reconstruction procedure.
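The weighting step can be sketched as follows; note that using `1 - w` as the "appropriate inverse" of the normalized backprojected image is our assumption for illustration, not necessarily the authors' exact construction:

```python
import numpy as np

def weight_updates(d_mu_a, d_mu_s, backproj):
    """Scale the absorption (d_mu_a) and scattering (d_mu_s) update
    vectors with weight matrices derived from a backprojected
    difference-intensity image."""
    # Normalized backprojection: biased towards the absorption inhomogeneity
    w_a = backproj / backproj.max()
    # Assumed 'appropriate inverse': biased towards the scattering inhomogeneity
    w_s = 1.0 - w_a
    return w_a * d_mu_a, w_s * d_mu_s
```

Pixels where the backprojected image is strong then mostly update the absorption reconstruction, and weak pixels mostly update the scattering reconstruction, suppressing the cross-talk.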
Abstract:
Receive antenna selection (AS) reduces the hardware complexity of multi-antenna receivers by dynamically connecting the instantaneously best antenna element to the available radio frequency (RF) chain. Due to hardware constraints, the channels at the various antenna elements must be sounded sequentially to obtain the estimates required for selecting the "best" antenna and for coherently demodulating data. Consequently, the channel state information at different antennas is outdated by different amounts. We show that, for this reason, simply selecting the antenna with the highest estimated channel gain is not optimal. Rather, the channel estimates of different antennas should be weighted differently, depending on the training scheme. We derive closed-form expressions for the symbol error probability (SEP) of AS for MPSK and MQAM in time-varying Rayleigh fading channels with arbitrary selection weights, and validate them with simulations. We then derive an explicit formula for the optimal selection weights that minimize the SEP. We find that when selection weights are not used, the SEP need not improve as the number of antenna elements increases, in contrast to the ideal channel-estimation case. However, the optimal selection weights remedy this situation and significantly improve performance.
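The selection rule reduces to an argmax over weighted estimated gains; a minimal sketch (the weight values below are illustrative placeholders, not the paper's closed-form optimal weights):

```python
import numpy as np

def select_antenna(h_est, weights):
    """Pick the antenna maximizing w_i * |h_i|^2 rather than the raw
    estimated gain; weights can discount estimates that are more outdated."""
    return int(np.argmax(weights * np.abs(h_est) ** 2))

h_est = np.array([1.0 + 0.0j, 1.2 + 0.0j])
print(select_antenna(h_est, np.array([1.0, 1.0])))  # 1: unweighted rule picks the larger gain
print(select_antenna(h_est, np.array([1.0, 0.5])))  # 0: a heavily discounted estimate loses
```

With unit weights this degenerates to the conventional (suboptimal, per the paper) highest-estimated-gain rule.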
Abstract:
A method is presented for optimising the performance indices of aperture antennas in the presence of blockage. An N-dimensional objective function is formed for maximising the directivity factor of a circular aperture with blockage under sidelobe-level constraints, and is minimised using the simplex search method. Optimum aperture distributions are computed for a circular aperture with blockage of circular geometry that gives the maximum directivity factor under sidelobe-level constraints.
Abstract:
Distributions of toxic substances and effect models in the environmental risk analysis of waste disposal sites.
Abstract:
We have explicitly derived the large-scale distribution of the quantum Ohmic resistance of a disordered one-dimensional conductor. We show that in the thermodynamic limit this distribution is characterized by two independent parameters for strong disorder, leading to a two-parameter scaling theory of localization. Only in the limit of weak disorder do we recover single-parameter scaling, consistent with existing theoretical treatments.
Abstract:
We propose to compress weighted graphs (networks), motivated by the observation that large networks of social, biological, or other relations can be complex to handle and visualize. In the process also known as graph simplification, nodes and (unweighted) edges are grouped to supernodes and superedges, respectively, to obtain a smaller graph. We propose models and algorithms for weighted graphs. The interpretation (i.e. decompression) of a compressed, weighted graph is that a pair of original nodes is connected by an edge if their supernodes are connected by one, and that the weight of an edge is approximated to be the weight of the superedge. The compression problem now consists of choosing supernodes, superedges, and superedge weights so that the approximation error is minimized while the amount of compression is maximized. In this paper, we formulate this task as the 'simple weighted graph compression problem'. We then propose a much wider class of tasks under the name of 'generalized weighted graph compression problem'. The generalized task extends the optimization to preserve longer-range connectivities between nodes, not just individual edge weights. We study the properties of these problems and propose a range of algorithms to solve them, with different balances between complexity and quality of the result. We evaluate the problems and algorithms experimentally on real networks. The results indicate that weighted graphs can be compressed efficiently with relatively little compression error.
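The decompression rule and its approximation error can be sketched for a dense weight matrix; taking the superedge weight to be the mean of the member-edge weights is one natural choice here (our assumption for illustration; the paper treats these quantities as optimization variables):

```python
import numpy as np

def compress(W, groups):
    """W: symmetric edge-weight matrix; groups: node-index lists, one per
    supernode. Returns the superedge weight matrix S."""
    k = len(groups)
    S = np.zeros((k, k))
    for a in range(k):
        for b in range(k):
            # superedge weight = mean of the member-edge weights in the block
            S[a, b] = W[np.ix_(groups[a], groups[b])].mean()
    return S

def approximation_error(W, S, groups):
    """Decompression approximates every original weight by its superedge
    weight; return the total squared error of that approximation."""
    err = 0.0
    for a, ga in enumerate(groups):
        for b, gb in enumerate(groups):
            err += float(((W[np.ix_(ga, gb)] - S[a, b]) ** 2).sum())
    return err
```

Grouping nodes with similar connection patterns keeps this error small while shrinking the graph, which is exactly the trade-off the compression problem formalizes.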
Abstract:
We study the photon-number distribution in squeezed states of a single-mode radiation field. A U(1)-invariant squeezing criterion is compared and contrasted with a more restrictive criterion, with the help of suggestive geometric representations. The U(1) invariance of the photon-number distribution in a squeezed coherent state, with arbitrary complex squeeze and displacement parameters, is explicitly demonstrated. The behavior of the photon-number distribution for a representative value of the displacement and various values of the squeeze parameter is numerically investigated. A new kind of giant oscillation riding as an envelope over more rapid oscillations in this distribution is demonstrated.
Abstract:
We use parallel weighted finite-state transducers to implement a part-of-speech tagger, which obtains state-of-the-art accuracy when used to tag the Europarl corpora for Finnish, Swedish and English. Our system consists of a weighted lexicon and a guesser combined with a bigram model factored into two weighted transducers. We use both lemmas and tag sequences in the bigram model, which guarantees reliable bigram estimates.
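The bigram decoding underlying such a tagger corresponds to a shortest path through the composed transducers; below is a toy Viterbi sketch over negative-log weights (the tropical semiring commonly used by weighted FSTs), with made-up probabilities rather than the paper's trained model:

```python
import math

def viterbi(words, tags, emit, trans, floor=1e-12):
    """Best tag sequence under a bigram model. Costs are negative log
    probabilities, i.e. tropical-semiring weights as in a weighted FST;
    unseen events get a small floor probability."""
    nl = lambda p: -math.log(p)
    best = {t: nl(trans.get(('<s>', t), floor)) + nl(emit.get((words[0], t), floor))
            for t in tags}
    back = []
    for w in words[1:]:
        prev, best, ptr = best, {}, {}
        for t in tags:
            p = min(tags, key=lambda q: prev[q] + nl(trans.get((q, t), floor)))
            ptr[t] = p
            best[t] = prev[p] + nl(trans.get((p, t), floor)) + nl(emit.get((w, t), floor))
        back.append(ptr)
    t = min(best, key=best.get)
    path = [t]
    for ptr in reversed(back):
        t = ptr[t]
        path.append(t)
    return path[::-1]

# Toy model (probabilities invented for illustration)
tags = ['DET', 'NOUN']
emit = {('the', 'DET'): 0.9, ('dog', 'NOUN'): 0.8}
trans = {('<s>', 'DET'): 0.7, ('DET', 'NOUN'): 0.9}
print(viterbi(['the', 'dog'], tags, emit, trans))  # ['DET', 'NOUN']
```

In the paper's setup the lexicon, guesser, and factored bigram model are each separate weighted transducers; composing them and taking the shortest path performs the same computation.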
Abstract:
In this paper we present simple methods for construction and evaluation of finite-state spell-checking tools using an existing finite-state lexical automaton, freely available finite-state tools and Internet corpora acquired from projects such as Wikipedia. As an example, we use a freely available open-source implementation of Finnish morphology, made with traditional finite-state morphology tools, and demonstrate rapid building of Northern Sámi and English spell checkers from tools and resources available from the Internet.
Abstract:
In this paper, we present the design and bit error performance analysis of weighted linear parallel interference cancellers (LPIC) for multicarrier (MC) DS-CDMA systems. We propose an LPIC scheme in which we estimate (and cancel) the multiple access interference (MAI) based on the soft outputs on the individual subcarriers, and the interference-cancelled outputs on the different subcarriers are combined to form the final decision statistic. We scale the MAI estimate on each subcarrier by a weight before cancellation; these weights are chosen so as to maximize the signal-to-interference ratios at the individual subcarrier outputs. For this weighted LPIC scheme, using an approach involving the characteristic function of the decision variable, we derive exact bit error rate (BER) expressions for different cancellation stages. Using the same approach, we also derive exact BER expressions for the matched filter (MF) and decorrelating detectors for the considered MC DS-CDMA system. We show that the proposed weighted LPIC scheme performs better than the MF detector and the conventional LPIC (where the weights are taken to be unity), and close to the decorrelating detector.
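One cancellation stage on a single subcarrier can be sketched as follows; the scalar weight and the correlation matrix below are illustrative placeholders (the paper derives the SIR-maximizing weights in closed form):

```python
import numpy as np

def lpic_stage(y, R, weight):
    """One weighted parallel interference cancellation stage.
    y: matched-filter soft outputs of all users on one subcarrier.
    R: normalized cross-correlation matrix (unit diagonal).
    MAI estimate for user k: sum over j != k of R[k, j] * y[j].
    weight = 1 recovers the conventional (unweighted) LPIC stage."""
    mai = (R - np.eye(len(y))) @ y
    return y - weight * mai

y = np.array([1.0, 0.5])
R = np.array([[1.0, 0.4], [0.4, 1.0]])
print(lpic_stage(y, R, weight=1.0))  # [0.8 0.1]
```

Iterating this stage, with a separate weight per stage and per subcarrier before combining, mirrors the multistage structure analyzed in the paper.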