426 results for Lagrange
Abstract:
This paper describes two algorithms for adaptive power and bit allocations in a multiple input multiple output multiple-carrier code division multiple access (MIMO MC-CDMA) system. The first is the greedy algorithm, which has already been presented in the literature. The other one, which is proposed by the authors, is based on the use of the Lagrange multiplier method. The performances of the two algorithms are compared via Monte Carlo simulations. At the present stage, the simulations are restricted to a single-user MIMO MC-CDMA system, which is equivalent to a MIMO OFDM system. It is assumed that the system operates in a frequency selective fading environment. The transmitter has partial knowledge of the channel, whose properties are measured at the receiver. The use of the two algorithms results in similar system performances. The advantage of the Lagrange algorithm is that it is much faster than the greedy algorithm. ©2005 IEEE
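As a rough illustration of the Lagrange-multiplier approach mentioned above (a minimal sketch, not the authors' algorithm): for parallel subchannels, the power split that maximizes total rate under a sum-power constraint is the classic water-filling solution, in which the water level acts as the Lagrange multiplier of the power constraint and can be found by bisection. Function and parameter names below are illustrative assumptions.

```python
# Minimal water-filling sketch: the water level mu is the Lagrange multiplier
# of the total-power constraint, located by bisection.
import numpy as np

def waterfilling(gains, total_power, tol=1e-9):
    """gains: subchannel power gains g_k (e.g. |h_k|^2 / noise); returns powers and bits."""
    inv = 1.0 / np.asarray(gains, dtype=float)
    lo, hi = inv.min(), inv.max() + total_power   # bracket for the water level mu
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        p = np.maximum(mu - inv, 0.0)             # p_k = max(mu - 1/g_k, 0)
        if p.sum() > total_power:
            hi = mu
        else:
            lo = mu
    p = np.maximum(lo - inv, 0.0)
    bits = np.log2(1.0 + np.asarray(gains, dtype=float) * p)  # per-subchannel rates
    return p, bits

# Example: 4 subcarriers of a single-user MIMO-OFDM-like link
p, b = waterfilling(gains=[2.0, 0.8, 0.3, 1.5], total_power=4.0)
print(p.round(3), b.round(3))
```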
Abstract:
ACM Computing Classification System (1998): G.1.1, G.1.2.
Abstract:
This thesis deals with the Lagrange projections, conformal projections of the ellipsoid of revolution onto the plane that map meridians and parallels to circular arcs; we analyse and review them, paying particular attention to the original sources (Lambert, Lagrange, Bonnet, Chebyshev, etc.). As an auxiliary tool, we introduce into cartography the characteristic function of a conformal projection, m = |f′(z)|⁻¹ with z = λ + iq, on the grounds that the curvatures of the images of the meridians and parallels are, according to J. L. Lagrange (1779), κ₁ = −m_λ and κ₂ = m_q, respectively (the subscript denotes a partial derivative). We parametrize the ellipsoid by the geodetic or geographic longitude λ and the isometric latitude q. A conformal projection is a Lagrange projection if and only if m_{λq} = 0. In this work we solve the system of equations Δ log m = 0, m_{λq} = 0. In this way we obtain a priori the characteristic function of the Lagrange projections and carry out a first classification: rectilinear projections, formed by three families (conformal cylindrical, conformal conic and azimuthal, and pseudopolar, the last of which is new in cartography), and circular projections, also formed by three families (Lagrange-Lambert, unipolar and apolar, the last two new in cartography). In the rectilinear projections all meridians or all parallels are mapped to straight lines; in the circular ones only some meridians or parallels are rectilinear...
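A quick worked check of the Lagrange condition stated above, using the normal Mercator projection as an example (this example is not taken from the thesis):

```latex
% Worked check (illustration, not from the thesis): in isometric coordinates
% z = \lambda + iq the normal Mercator projection is f(z) = Rz, so its
% characteristic function is constant and the Lagrange condition holds trivially:
\[
  m = \lvert f'(z)\rvert^{-1} = \tfrac{1}{R}
  \quad\Longrightarrow\quad
  m_{\lambda q} = 0, \qquad \Delta \log m = 0,
\]
% placing Mercator in the rectilinear (conformal cylindrical) family, where all
% meridians and parallels map to straight lines.
```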
Abstract:
The book also covers the Second Variation, Euler-Lagrange PDE systems, and higher-order conservation laws.
Abstract:
An Euler-Lagrange particle tracking model, developed for simulating fire atmosphere/sprinkler spray interactions, is described. Full details of the model, along with the approximations made and the restrictions that apply, are presented. Errors commonly found in previous formulations of the source terms used in this two-phase approach are described and corrected. In order to demonstrate the capabilities of the model, it is applied to the simulation of a fire in a long corridor containing a sprinkler. The simulation presented is three-dimensional and transient, and considers mass, momentum and energy transfer between the gaseous atmosphere and injected liquid droplets.
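A minimal sketch of the Lagrangian (particle) side of such a model, assuming a single droplet under Stokes drag in a prescribed gas velocity field; the paper's model additionally exchanges mass, momentum and energy with the gas phase and corrects the associated source terms, none of which is reproduced here. All names and constants below are illustrative assumptions.

```python
# One droplet advanced by explicit Euler steps under Stokes drag plus gravity
# in a prescribed gas velocity field (one-way coupled toy example).
import numpy as np

RHO_WATER = 1000.0     # kg/m^3, droplet density (assumed)
MU_GAS = 1.8e-5        # Pa.s, gas dynamic viscosity (assumed constant)
G = np.array([0.0, 0.0, -9.81])

def gas_velocity(x, t):
    """Placeholder gas field; in the full model a CFD solver supplies this."""
    return np.array([1.0, 0.0, -0.2])

def track_droplet(x0, v0, diameter, dt=1e-4, t_end=0.5):
    x, v, t = np.array(x0, float), np.array(v0, float), 0.0
    tau = RHO_WATER * diameter**2 / (18.0 * MU_GAS)   # Stokes relaxation time
    while t < t_end:
        accel = (gas_velocity(x, t) - v) / tau + G    # drag acceleration + gravity
        v += dt * accel
        x += dt * v
        t += dt
    return x, v

print(track_droplet(x0=[0, 0, 3.0], v0=[0, 0, -5.0], diameter=1e-3))
```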
Abstract:
Este reporte es el resultado de un doble proceso. Por una parte de un interés surgido en el Seminario de Pensamiento Matemático y por otro de la inquietud por compartir una propuesta didáctica para la enseñanza de un tema en particular. En este enfoque alternativo, el profesor podría dejar de ser el emisor del conocimiento y el estudiante su receptor. Investigaciones recientes de la matemática educativa, ponen en evidencia que el proceso de enseñanza aprendizaje trasciende al mero acto de transmitir un saber. Desde el acercamiento teórico de la socio epistemología, consideramos que la visualización aplicada al tratamiento escolar de una noción juega un papel preponderante en la formación de conceptos y procesos matemáticos entre los alumnos. La intención del póster fue la de mostrar un ejemplo concreto de cómo puede enriquecerse un enfoque educativo si se incluye una situación de aprendizaje en la que se haga uso de la visualización del concepto. En este caso, presentamos una situación en la que el estudiante esté en condiciones de llegar, mediante sus propias nociones y de la movilización de habilidades de visualización, a una construcción del polinomio de Lagrange que pasa por n puntos.
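For reference, a minimal sketch of the object the students are led to construct, the Lagrange polynomial through n points, evaluated directly from the Lagrange basis (illustrative code, not part of the didactic proposal):

```python
# Value at x of the unique polynomial of degree <= n-1 through the points (xs[i], ys[i]),
# built from the Lagrange basis polynomials.
def lagrange_eval(xs, ys, x):
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        basis = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                basis *= (x - xj) / (xi - xj)   # L_i(x) = prod_{j!=i} (x-x_j)/(x_i-x_j)
        total += yi * basis
    return total

# The parabola through (0,1), (1,3), (2,9) is 2x^2 + 1, so its value at x = 3 is 19.
print(lagrange_eval([0, 1, 2], [1, 3, 9], 3.0))   # -> 19.0
```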
Abstract:
For certain continuum problems, it is desirable and beneficial to combine two different methods in order to exploit their advantages while evading their disadvantages. In this paper, a bridging transition algorithm is developed for the combination of the meshfree method (MM) with the finite element method (FEM). In this coupled method, the meshfree method is used in the sub-domain where the MM is required to obtain high accuracy, and the finite element method is employed in other sub-domains where FEM is required to improve the computational efficiency. The MM domain and the FEM domain are connected by a transition (bridging) region. A modified variational formulation and the Lagrange multiplier method are used to ensure the compatibility of displacements and their gradients. To improve the computational efficiency and reduce the meshing cost in the transition region, regularly distributed transition particles, which are independent of either the meshfree nodes or the FE nodes, can be inserted into the transition region. The newly developed coupled method is applied to the stress analysis of 2D solids and structures in order to investigate its performance and study its parameters. Numerical results show that the present coupled method is convergent, accurate and stable. The coupled method has promising potential for practical applications, because it takes advantage of both the meshfree method and FEM while overcoming their shortcomings.
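A toy sketch of how Lagrange multipliers enforce compatibility between two discretized sub-domains via the usual saddle-point system; this is a generic illustration, not the paper's modified variational formulation, and all values are made up.

```python
# Solve the saddle-point system
#   [K  G^T] [u]   [f]
#   [G   0 ] [l] = [0]
# where G u = 0 expresses equality of the interface degrees of freedom.
import numpy as np

# Two 1-DOF "sub-domains" with stiffnesses 4 and 6, loads 1 and 2,
# constrained to share the same interface displacement: u1 - u2 = 0.
K = np.diag([4.0, 6.0])          # block-diagonal stiffness of the two sub-domains
f = np.array([1.0, 2.0])
G = np.array([[1.0, -1.0]])      # constraint u1 - u2 = 0

n, m = K.shape[0], G.shape[0]
A = np.block([[K, G.T], [G, np.zeros((m, m))]])
b = np.concatenate([f, np.zeros(m)])
sol = np.linalg.solve(A, b)
u, lam = sol[:n], sol[n:]
print("displacements:", u, "interface force (multiplier):", lam)
```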
Abstract:
We evaluate the performance of several specification tests for Markov regime-switching time-series models. We consider the Lagrange multiplier (LM) and dynamic specification tests of Hamilton (1996) and Ljung–Box tests based on both the generalized residual and a standard-normal residual constructed using the Rosenblatt transformation. The size and power of the tests are studied using Monte Carlo experiments. We find that the LM tests have the best size and power properties. The Ljung–Box tests exhibit slight size distortions, though tests based on the Rosenblatt transformation perform better than the generalized residual-based tests. The tests exhibit impressive power to detect both autocorrelation and autoregressive conditional heteroscedasticity (ARCH). The tests are illustrated with a Markov-switching generalized ARCH (GARCH) model fitted to the US dollar–British pound exchange rate, with the finding that both autocorrelation and GARCH effects are needed to adequately fit the data.
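For reference, a minimal sketch of the Ljung–Box Q statistic on a residual series (the generic textbook formula, not Hamilton's LM or dynamic specification tests); applying it to squared residuals gives a crude check for ARCH effects.

```python
# Q = n (n + 2) * sum_{k=1}^{h} rho_k^2 / (n - k), approximately chi^2(h)
# under the null of no autocorrelation.
import numpy as np
from scipy.stats import chi2

def ljung_box(resid, h=10):
    x = np.asarray(resid, float) - np.mean(resid)
    n = len(x)
    denom = np.sum(x * x)
    q = 0.0
    for k in range(1, h + 1):
        rho_k = np.sum(x[k:] * x[:-k]) / denom   # lag-k sample autocorrelation
        q += rho_k**2 / (n - k)
    q *= n * (n + 2)
    return q, chi2.sf(q, df=h)                    # statistic and p-value

rng = np.random.default_rng(0)
print(ljung_box(rng.standard_normal(500), h=10))  # white noise: large p-value expected
```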
Abstract:
In this paper, we propose a multivariate GARCH model with a time-varying conditional correlation structure. The new double smooth transition conditional correlation (DSTCC) GARCH model extends the smooth transition conditional correlation (STCC) GARCH model of Silvennoinen and Teräsvirta (2005) by including another variable according to which the correlations change smoothly between states of constant correlations. A Lagrange multiplier test is derived to test the constancy of correlations against the DSTCC-GARCH model, and another one to test for another transition in the STCC-GARCH framework. In addition, other specification tests, with the aim of aiding the model building procedure, are considered. Analytical expressions for the test statistics and the required derivatives are provided. Applying the model to the stock and bond futures data, we discover that the correlation pattern between them has dramatically changed around the turn of the century. The model is also applied to a selection of world stock indices, and we find evidence for an increasing degree of integration in the capital markets.
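A sketch of the smooth-transition correlation mechanism only (not the full DSTCC-GARCH model or its LM tests): the conditional correlation moves between two constant states through a logistic transition function of a transition variable; the DSTCC extension adds a second such transition driven by another variable. Parameter values below are arbitrary.

```python
# Time-varying correlation rho_t = (1 - G) * rho1 + G * rho2, where G is a
# logistic transition function of the transition variable s_t.
import numpy as np

def transition(s, gamma, c):
    """Logistic transition G(s; gamma, c) taking values in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-gamma * (s - c)))

def stcc_correlation(s_t, rho1=0.2, rho2=0.8, gamma=5.0, c=0.0):
    g = transition(s_t, gamma, c)
    return (1.0 - g) * rho1 + g * rho2

# Transition variable, e.g. calendar time rescaled to [-1, 1]
s = np.linspace(-1.0, 1.0, 5)
print(stcc_correlation(s).round(3))
```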
Abstract:
The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation for these images. A specific fixed decomposition structure is designed to be used by the wavelet packet in order to save on the computation, transmission, and storage costs. This decomposition structure is based on analysis of information packing performance of several decompositions, two-dimensional power spectral density, effect of each frequency band on the reconstructed image, and the human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean square criterion as well as the sensitivities in human vision. To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model shape parameter is formulated to estimate the model parameters. A noise shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, the lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters known as truncation level and scaling factor. In lattice-based compression algorithms reported in the literature the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach. In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made and no training and multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage. Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all of the source vectors without the need to project these on the lattice outermost shell, while it properly maintains a small codebook size. It also resolves the wedge region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only the cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training and multiquantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images.
For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce the bit requirements of the necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high quality reconstructed images with better compression ratios than other available algorithms. To evaluate the proposed algorithms, their objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for a reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structural-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
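As a rough illustration of bit allocation among subbands (the classical log-variance rule, not the thesis's noise-shaping procedure), under the assumption that subband variances are known:

```python
# Classical variance-based bit allocation:
#   b_k = B_avg + 0.5 * log2( sigma_k^2 / geometric_mean(sigma^2) )
# Negative allocations are clamped to zero for simplicity.
import numpy as np

def allocate_bits(subband_vars, avg_bits_per_coeff):
    v = np.asarray(subband_vars, float)
    geo_mean = np.exp(np.mean(np.log(v)))
    b = avg_bits_per_coeff + 0.5 * np.log2(v / geo_mean)
    return np.clip(b, 0.0, None)

# Four subbands with decreasing variance and an average budget of 2 bits/coefficient
print(allocate_bits([4.0, 1.0, 0.25, 0.0625], avg_bits_per_coeff=2.0).round(2))
```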
Abstract:
This paper presents the response of pile foundations to ground shocks induced by surface explosion using fully coupled, non-linear dynamic computer simulation techniques together with different material models for the explosive, air, soil and pile. It uses the Arbitrary Lagrangian-Eulerian (ALE) coupling formulation with appropriate material state parameters and equations. Blast wave propagation in soil, horizontal pile deformation and pile damage are presented to facilitate failure evaluation of piles. Effects of the end restraint of the pile head and of the number and spacing of piles within a group on their blast response and potential failure are investigated. The techniques developed and applied in this paper, and its findings, provide valuable information on the blast response and failure evaluation of piles and will provide guidance in their future analysis and design.
Abstract:
We present two unconditionally secure protocols for private set disjointness tests. In order to provide intuition for our protocols, we give a naive example that applies Sylvester matrices. Unfortunately, this simple construction is insecure as it reveals information about the intersection cardinality. More specifically, it discloses its lower bound. By using Lagrange interpolation, we provide a protocol for the honest-but-curious case without revealing any additional information. Finally, we describe a protocol that is secure against malicious adversaries. In this protocol, a verification test is applied to detect misbehaving participants. Both protocols require O(1) rounds of communication. Our protocols are more efficient than previous protocols in terms of communication and computation overhead. Unlike previous protocols, whose security relies on computational assumptions, our protocols provide information-theoretic security. To our knowledge, our protocols are the first ones that have been designed without a generic secure function evaluation. More importantly, they are the most efficient protocols for private disjointness tests in the malicious adversary case.
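For context, a sketch of the building block such protocols rely on, Lagrange interpolation over a prime field; this is not the private disjointness protocol itself, which additionally randomizes and distributes the polynomial among the parties. The prime and the sample polynomial below are arbitrary choices.

```python
# Reconstruct f(0) from evaluations (x_i, f(x_i)) of a polynomial over GF(p),
# using the Lagrange basis evaluated at zero.
P = 2**61 - 1   # a Mersenne prime, chosen here only for illustration

def lagrange_at_zero(points, p=P):
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if j != i:
                num = num * (-xj) % p          # product of (0 - x_j)
                den = den * (xi - xj) % p      # product of (x_i - x_j)
        total = (total + yi * num * pow(den, -1, p)) % p
    return total

# f(x) = 7 + 3x + 5x^2 over GF(p); any three evaluations recover f(0) = 7.
f = lambda x: (7 + 3 * x + 5 * x * x) % P
print(lagrange_at_zero([(2, f(2)), (5, f(5)), (9, f(9))]))   # -> 7
```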
Abstract:
We present efficient protocols for private set disjointness tests. We start from an intuition for our protocols that applies Sylvester matrices. Unfortunately, this simple construction is insecure as it reveals information about the cardinality of the intersection. More specifically, it discloses its lower bound. By using Lagrange interpolation, we provide a protocol for the honest-but-curious case without revealing any additional information. Finally, we describe a protocol that is secure against malicious adversaries. The protocol applies a verification test to detect misbehaving participants. Both protocols require O(1) rounds of communication. Our protocols are more efficient than previous protocols in terms of communication and computation overhead. Unlike previous protocols, whose security relies on computational assumptions, our protocols provide information-theoretic security. To our knowledge, our protocols are the first ones that have been designed without a generic secure function evaluation. More importantly, they are the most efficient protocols for private disjointness tests in the malicious adversary case.
Abstract:
We first classify state-of-the-art approaches to the stream authentication problem in the multicast environment, grouping them into signing and MAC approaches. A new approach for authenticating digital streams using threshold techniques is introduced. The main advantages of the new approach are that it tolerates packet loss up to a threshold number and has minimal space overhead. It is most suitable for multicast applications running over lossy, unreliable communication channels while at the same time preserving the security requirements. We use linear equations based on Lagrange polynomial interpolation and combinatorial design methods.
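A toy sketch of the threshold idea for loss tolerance (not the paper's scheme, which also employs combinatorial designs): an authentication value is split into n per-packet shares by evaluating a random polynomial of degree k−1, so any k received packets recover it and up to n−k packets may be lost; recovery below solves the resulting linear (Vandermonde) system modulo a prime. All names and parameters are illustrative assumptions.

```python
# (k, n) threshold split of an authentication value across packets, with
# recovery from any k shares by Gaussian elimination modulo a prime.
import random

P = 2**31 - 1   # illustrative prime modulus

def split(tag, k, n):
    """One share per packet: share_i = f(i) for a random degree k-1 poly with f(0) = tag."""
    coeffs = [tag] + [random.randrange(P) for _ in range(k - 1)]
    return [(i, sum(c * pow(i, e, P) for e, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def recover(shares):
    """Solve the k x k Vandermonde system for the coefficients; f(0) is the tag."""
    k = len(shares)
    rows = [[pow(x, e, P) for e in range(k)] + [y] for x, y in shares]  # [1, x, ..., x^{k-1} | y]
    for col in range(k):                      # Gauss-Jordan elimination mod P
        piv = next(r for r in range(col, k) if rows[r][col])
        rows[col], rows[piv] = rows[piv], rows[col]
        inv = pow(rows[col][col], -1, P)
        rows[col] = [v * inv % P for v in rows[col]]
        for r in range(k):
            if r != col and rows[r][col]:
                factor = rows[r][col]
                rows[r] = [(a - factor * b) % P for a, b in zip(rows[r], rows[col])]
    return rows[0][-1]                        # constant coefficient = tag

shares = split(tag=123456789, k=3, n=5)
print(recover(random.sample(shares, 3)))      # any 3 of the 5 packets recover the tag
```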