14 results for Lagrange, Functions of
at Queensland University of Technology - ePrints Archive
Abstract:
For certain continuum problems, it is desirable and beneficial to combine two different methods in order to exploit their advantages while avoiding their disadvantages. In this paper, a bridging transition algorithm is developed for combining the meshfree method (MM) with the finite element method (FEM). In this coupled method, the meshfree method is used in the sub-domain where high accuracy is required, and the finite element method is employed in the other sub-domains to improve computational efficiency. The MM domain and the FEM domain are connected by a transition (bridging) region. A modified variational formulation and the Lagrange multiplier method are used to ensure the compatibility of displacements and their gradients. To improve computational efficiency and reduce the meshing cost in the transition region, regularly distributed transition particles, which are independent of both the meshfree nodes and the FE nodes, can be inserted into the transition region. The newly developed coupled method is applied to the stress analysis of 2D solids and structures in order to investigate its performance and study its parameters. Numerical results show that the present coupled method is convergent, accurate and stable. The coupled method has promising potential for practical applications, because it takes advantage of both the meshfree method and FEM while overcoming their shortcomings.
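As a minimal sketch of the kind of constrained functional such a Lagrange multiplier coupling typically employs (notation is illustrative, not taken from the paper), the displacement fields of the two sub-domains are tied together on the transition interface by a multiplier field:

```latex
% Illustrative constrained functional for MM--FEM coupling (notation assumed,
% not taken from the paper): \Pi_{MM} and \Pi_{FE} are the potential energies
% of the meshfree and finite element sub-domains, u^{MM} and u^{FE} their
% displacement fields, and \lambda a Lagrange multiplier field on the
% transition interface \Gamma_t enforcing displacement compatibility.
\Pi^{*} = \Pi_{MM}\!\left(u^{MM}\right) + \Pi_{FE}\!\left(u^{FE}\right)
        + \int_{\Gamma_t} \lambda \cdot \left(u^{MM} - u^{FE}\right)\, \mathrm{d}\Gamma ,
\qquad \delta \Pi^{*} = 0 .
```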
Abstract:
We evaluate the performance of several specification tests for Markov regime-switching time-series models. We consider the Lagrange multiplier (LM) and dynamic specification tests of Hamilton (1996) and Ljung–Box tests based on both the generalized residual and a standard-normal residual constructed using the Rosenblatt transformation. The size and power of the tests are studied using Monte Carlo experiments. We find that the LM tests have the best size and power properties. The Ljung–Box tests exhibit slight size distortions, though tests based on the Rosenblatt transformation perform better than the generalized residual-based tests. The tests exhibit impressive power to detect both autocorrelation and autoregressive conditional heteroscedasticity (ARCH). The tests are illustrated with a Markov-switching generalized ARCH (GARCH) model fitted to the US dollar–British pound exchange rate, with the finding that both autocorrelation and GARCH effects are needed to adequately fit the data.
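As a minimal illustration of the residual-based diagnostics discussed above (not the authors' code), a Ljung-Box test can be applied to a residual series with standard statistical tooling; the residual construction itself (generalized or Rosenblatt-transformed) is assumed to have been done elsewhere.

```python
# Minimal sketch: Ljung-Box tests on a residual series (illustrative only; the
# generalized or Rosenblatt-transformed residuals from the regime-switching
# model are assumed to be computed elsewhere).
import numpy as np
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(0)
residuals = rng.standard_normal(500)   # placeholder for model residuals

# Test for autocorrelation in the residuals and, via squared residuals, for ARCH effects.
lb_levels = acorr_ljungbox(residuals, lags=[10], return_df=True)
lb_squares = acorr_ljungbox(residuals**2, lags=[10], return_df=True)

print(lb_levels)    # columns: lb_stat, lb_pvalue
print(lb_squares)
```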
Abstract:
In this paper, we propose a multivariate GARCH model with a time-varying conditional correlation structure. The new double smooth transition conditional correlation (DSTCC) GARCH model extends the smooth transition conditional correlation (STCC) GARCH model of Silvennoinen and Teräsvirta (2005) by including another variable according to which the correlations change smoothly between states of constant correlations. A Lagrange multiplier test is derived to test the constancy of correlations against the DSTCC-GARCH model, and another one to test for another transition in the STCC-GARCH framework. In addition, other specification tests, with the aim of aiding the model building procedure, are considered. Analytical expressions for the test statistics and the required derivatives are provided. Applying the model to the stock and bond futures data, we discover that the correlation pattern between them has dramatically changed around the turn of the century. The model is also applied to a selection of world stock indices, and we find evidence for an increasing degree of integration in the capital markets.
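A hedged sketch of the smooth transition correlation structure this model family builds on (symbols are illustrative, not the paper's notation): the conditional correlation matrix moves smoothly between constant extreme states as a logistic function of a transition variable, and the double (DSTCC) extension adds a second transition variable.

```latex
% Illustrative STCC/DSTCC correlation structure (notation assumed):
% R_{(1)}, R_{(2)} are constant correlation matrices, s_{1t}, s_{2t} transition
% variables, and G_i(\cdot) logistic transition functions with slope \gamma_i
% and location c_i.
R_t = \bigl(1 - G_1(s_{1t})\bigr) R_{(1)t} + G_1(s_{1t}) R_{(2)t},
\qquad
G_i(s_{it}) = \bigl(1 + e^{-\gamma_i (s_{it} - c_i)}\bigr)^{-1}, \quad \gamma_i > 0 .
% In the double (DSTCC) case each extreme state is itself a smooth-transition
% combination driven by the second variable s_{2t}:
R_{(j)t} = \bigl(1 - G_2(s_{2t})\bigr) R_{(j1)} + G_2(s_{2t}) R_{(j2)}, \quad j = 1,2 .
```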
Abstract:
The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation for these images. A specific fixed decomposition structure is designed to be used by the wavelet packet in order to save on computation, transmission, and storage costs. This decomposition structure is based on analysis of the information packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean square criteria as well as the sensitivities of human vision. To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model shape parameter is formulated to estimate the model parameters. A noise shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters, known as the truncation level and scaling factor. In lattice-based compression algorithms reported in the literature, the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach. In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made and no training or multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage. Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all source vectors without the need to project them onto the lattice outermost shell, while properly maintaining a small codebook size. It also resolves the wedge region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training and multi-quantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images.
For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce the bit requirements of the necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high quality reconstructed images with better compression ratios than other available algorithms. To evaluate the proposed algorithms, objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structural-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
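As an illustration of the kind of fixed wavelet packet decomposition described above (a sketch only; the decomposition structure, filter and depth used in the thesis are not reproduced here, and the choices below are assumptions), PyWavelets can compute a 2D wavelet packet tree and expose the subbands for per-band modelling and quantization.

```python
# Minimal sketch of a fixed 2D wavelet packet decomposition of a fingerprint
# image (illustrative; the wavelet filter "bior4.4" and depth 3 are assumptions,
# not the structure chosen in the thesis).
import numpy as np
import pywt

image = np.random.rand(256, 256)   # placeholder for a fingerprint image

wp = pywt.WaveletPacket2D(data=image, wavelet="bior4.4", mode="symmetric", maxlevel=3)

# Collect the subbands at the chosen depth; each node's coefficients could then
# be modelled (e.g. with a generalized Gaussian) and quantized separately.
subbands = {node.path: node.data for node in wp.get_level(3, order="natural")}
for path, coeffs in list(subbands.items())[:4]:
    print(path, coeffs.shape, float(np.var(coeffs)))
```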
Abstract:
This paper presents the response of pile foundations to ground shocks induced by surface explosion, using fully coupled, non-linear dynamic computer simulation techniques together with different material models for the explosive, air, soil and pile. It uses the Arbitrary Lagrange Euler coupling formulation with appropriate material parameters and equations of state. Blast wave propagation in soil, horizontal pile deformation and pile damage are presented to facilitate failure evaluation of piles. The effects of the end restraint of the pile head, and of the number and spacing of piles within a group, on their blast response and potential failure are investigated. The techniques developed and applied in this paper, together with its findings, provide valuable information on the blast response and failure evaluation of piles and will provide guidance for their future analysis and design.
Abstract:
The creation of the term health resilience is an important step towards building more resilient communities that can better cope with future disasters. To date, however, there appears to be little literature on how the concept of health resilience should be defined. This article aims to build a comprehensive health disaster management approach guided by the concept of resilience. Electronic health databases were searched to retrieve key publications that may have contributed to the aims and objectives of the research. A total of 61 publications were included in the final analysis of this paper, focusing on those that provide a comprehensive description of theories and definitions of disaster resilience and those that propose a definition and conceptual framework for health resilience. Resilience is an inherent adaptive capacity for coping with future uncertainty. It involves the use of multiple strategies, an all-hazards approach, and seeking a positive outcome through linkage and cooperation among the different elements of the community. Health resilience can be defined as the capacity of health organisations to resist, absorb, and respond to the impact of disasters while maintaining essential functions, and then to recover to their original state or adapt to a new one. It can be assessed against criteria such as robustness, redundancy, resourcefulness and rapidity, and includes the key dimensions of vulnerability and safety, disaster resources and preparedness, continuity of essential health services, and recovery and adaptation. This new concept brings together the disaster management capabilities of health organisations, management tasks, activities and disaster outcomes in a comprehensive overall picture, and uses an integrated approach with an achievable goal. Future research into its measurement is urgently needed.
Abstract:
We present two unconditionally secure protocols for private set disjointness tests. To provide intuition for our protocols, we give a naive example that applies Sylvester matrices. Unfortunately, this simple construction is insecure, as it reveals information about the intersection cardinality; more specifically, it discloses its lower bound. By using Lagrange interpolation, we provide a protocol for the honest-but-curious case that reveals no additional information. Finally, we describe a protocol that is secure against malicious adversaries. In this protocol, a verification test is applied to detect misbehaving participants. Both protocols require O(1) rounds of communication. Our protocols are more efficient than previous protocols in terms of communication and computation overhead. Unlike previous protocols, whose security relies on computational assumptions, our protocols provide information-theoretic security. To our knowledge, our protocols are the first to be designed without generic secure function evaluation. More importantly, they are the most efficient protocols for private disjointness tests in the malicious adversary case.
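To make the Lagrange-interpolation ingredient concrete (a generic sketch, not the authors' protocol), the following shows interpolation over a prime field from a set of share points, the primitive that such private-set protocols typically build on. The prime and polynomial are placeholders.

```python
# Generic sketch of Lagrange interpolation over a prime field GF(p) -- the
# building block referred to above, not the full disjointness protocol.
P = 2**61 - 1  # a large prime (choice is illustrative)

def lagrange_interpolate_at_zero(points, p=P):
    """Given points (x_i, y_i) on a polynomial over GF(p), return f(0)."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = (num * (-xj)) % p
                den = (den * (xi - xj)) % p
        total = (total + yi * num * pow(den, -1, p)) % p
    return total

# Example: f(x) = 7 + 3x + 5x^2 (mod P); recover f(0) = 7 from three evaluations.
def f(x):
    return (7 + 3 * x + 5 * x * x) % P

shares = [(x, f(x)) for x in (1, 2, 3)]
assert lagrange_interpolate_at_zero(shares) == 7
```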
Abstract:
We present efficient protocols for private set disjointness tests. We start from an intuition for our protocols that applies Sylvester matrices. Unfortunately, this simple construction is insecure, as it reveals information about the cardinality of the intersection; more specifically, it discloses its lower bound. By using Lagrange interpolation, we provide a protocol for the honest-but-curious case that reveals no additional information. Finally, we describe a protocol that is secure against malicious adversaries. The protocol applies a verification test to detect misbehaving participants. Both protocols require O(1) rounds of communication. Our protocols are more efficient than previous protocols in terms of communication and computation overhead. Unlike previous protocols, whose security relies on computational assumptions, our protocols provide information-theoretic security. To our knowledge, our protocols are the first to be designed without generic secure function evaluation. More importantly, they are the most efficient protocols for private disjointness tests in the malicious adversary case.
Abstract:
We first classify state-of-the-art stream authentication schemes for the multicast environment and group them into signing and MAC approaches. A new approach for authenticating digital streams using threshold techniques is introduced. The main advantages of the new approach are that it tolerates packet loss, up to a threshold number, and has minimal space overhead. It is most suitable for multicast applications running over lossy, unreliable communication channels while, at the same time, preserving the security requirements. We use linear equations based on Lagrange polynomial interpolation and combinatorial design methods.
Abstract:
This paper develops and presents a fully coupled, non-linear finite element procedure to treat the response of piles to ground shocks induced by underground explosions. The Arbitrary Lagrange Euler coupling formulation with appropriate material parameters and equations of state is used in the study. Pile responses in four different soil types, viz. saturated soil, partially saturated soil, and loose and dense dry soils, are investigated and the results compared. Numerical results are validated by comparison with those from a standard design manual. Blast wave propagation in soils, horizontal pile deformations and damage in the pile are presented. The pile damage, presented through plastic strain diagrams, will enable the vulnerability assessment of the piles under the blast scenarios considered. The numerical results indicate that the blast effects on piles embedded in saturated soil and loose dry soil are more severe than on piles embedded in partially saturated soil and dense dry soil. The present findings should serve as a benchmark reference for future analysis and design.
Abstract:
This paper introduces the smooth transition logit (STL) model that is designed to detect and model situations in which there is structural change in the behaviour underlying the latent index from which the binary dependent variable is constructed. The maximum likelihood estimators of the parameters of the model are derived along with their asymptotic properties, together with a Lagrange multiplier test of the null hypothesis of linearity in the underlying latent index. The development of the STL model is motivated by the desire to assess the impact of deregulation in the Queensland electricity market and ascertain whether increased competition has resulted in significant changes in the behaviour of the spot price of electricity, specifically with respect to the occurrence of periodic abnormally high prices. The model allows the timing of any change to be endogenously determined and also market participants' behaviour to change gradually over time. The main results provide clear evidence in support of a structural change in the nature of price events, and the endogenously determined timing of the change is consistent with the process of deregulation in Queensland.
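A hedged sketch of the latent-index structure a smooth transition logit of this kind implies (notation is illustrative, not the paper's): the coefficients of the index shift gradually between two regimes according to a logistic function of time, and the Lagrange multiplier test examines the null that the transition term is absent.

```latex
% Illustrative smooth transition logit structure (notation assumed):
% y_t equals 1 when the latent index y_t^* exceeds zero, with regime change
% governed by a logistic transition function G(t;\gamma,c) of time.
y_t^{*} = x_t'\beta_1 + G(t;\gamma,c)\, x_t'\beta_2 + \varepsilon_t ,
\qquad
G(t;\gamma,c) = \bigl(1 + e^{-\gamma (t - c)}\bigr)^{-1}, \quad \gamma > 0 ,
\qquad
y_t = \mathbf{1}\{\, y_t^{*} > 0 \,\} .
% Linearity of the latent index corresponds to the null that the transition
% term vanishes (e.g. \beta_2 = 0 or \gamma = 0 in this notation), which the
% Lagrange multiplier test examines.
```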
Abstract:
Crystallization of amorphous germanium (a-Ge) by laser or electron beam heating is a remarkably complex process that involves several distinct modes of crystal growth and the development of intricate microstructural patterns on the nanosecond to ten microsecond time scales. Here we use dynamic transmission electron microscopy (DTEM) to study the fast, complex crystallization dynamics with 10 nm spatial and 15 ns temporal resolution. We have obtained time-resolved real-space images of nanosecond laser-induced crystallization in a-Ge with unprecedentedly high spatial resolution. Direct visualization of the crystallization front allows for time-resolved snapshots of the initiation and roughening of the dendrites on submicrosecond time scales. This growth is followed by a rapid transition to a ledgelike growth mechanism that produces a layered microstructure on a time scale of several microseconds. This study provides insights into the mechanisms governing this complex crystallization process and is a dramatic demonstration of the power of DTEM for studying time-dependent material processes far from equilibrium.
Abstract:
The crystallization of amorphous semiconductors is a strongly exothermic process. Once initiated the release of latent heat can be sufficient to drive a self-sustaining crystallization front through the material in a manner that has been described as explosive. Here, we perform a quantitative in situ study of explosive crystallization in amorphous germanium using dynamic transmission electron microscopy. Direct observations of the speed of the explosive crystallization front as it evolves along a laser-imprinted temperature gradient are used to experimentally determine the complete interface response function (i.e., the temperature-dependent front propagation speed) for this process, which reaches a peak of 16 m/s. Fitting to the Frenkel-Wilson kinetic law demonstrates that the diffusivity of the material locally/immediately in advance of the explosive crystallization front is inconsistent with those of a liquid phase. This result suggests a modification to the liquid-mediated mechanism commonly used to describe this process that replaces the phase change at the leading amorphous-liquid interface with a change in bonding character (from covalent to metallic) occurring in the hot amorphous material.
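For reference, a standard textbook form of the Frenkel-Wilson (Wilson-Frenkel) kinetic law mentioned above is given below; the exact parameterisation fitted in the paper may differ.

```latex
% Standard Wilson--Frenkel form of the interface response function (the exact
% parameterisation used in the paper may differ): v(T) is the crystallization
% front speed, D(T) a diffusivity in the material ahead of the front (often
% Arrhenius, D_0 e^{-Q/k_B T}), \lambda an atomic jump distance, and
% \Delta G(T) the driving free energy of crystallization.
v(T) = \frac{D(T)}{\lambda}\left[\,1 - \exp\!\left(-\frac{\Delta G(T)}{k_B T}\right)\right].
```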
Abstract:
A test for time-varying correlation is developed within the framework of a dynamic conditional score (DCS) model for both Gaussian and Student t-distributions. The test may be interpreted as a Lagrange multiplier test and modified to allow for the estimation of models for time-varying volatility in the individual series. Unlike standard moment-based tests, the score-based test statistic includes information on the level of correlation under the null hypothesis and local power arguments indicate the benefits of doing so. A simulation study shows that the performance of the score-based test is strong relative to existing tests across a range of data generating processes. An application to the Hong Kong and South Korean equity markets shows that the new test reveals changes in correlation that are not detected by the standard moment-based test.
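In generic form (not the paper's specific derivation), a Lagrange multiplier test of this kind evaluates the score of the unrestricted likelihood at the estimate obtained under the null of constant correlation:

```latex
% Generic Lagrange multiplier (score) statistic, evaluated at the restricted
% estimate \tilde{\theta} obtained under the null of constant correlation:
% s(\cdot) is the score vector and I(\cdot) the information matrix; under the
% null, LM is asymptotically \chi^2 with degrees of freedom q equal to the
% number of restrictions tested.
\mathrm{LM} = s(\tilde{\theta})' \, I(\tilde{\theta})^{-1} \, s(\tilde{\theta})
\;\xrightarrow{d}\; \chi^2_{q} .
```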