634 results for Adaptive analysis

at Queensland University of Technology - ePrints Archive


Relevance: 80.00%

Abstract:

A new mesh adaptivity algorithm is proposed that combines a posteriori error estimation with a bubble-type local mesh generation (BLMG) strategy for elliptic differential equations. The size function used in the BLMG is defined at each vertex during the adaptive process based on the computed error estimator. To avoid excessive coarsening and refining in each iterative step, two factor thresholds are introduced in the size function. The advantages of the BLMG-based adaptive finite element method over other known methods are as follows: refining and coarsening are handled seamlessly within the same framework; the local a posteriori error estimation is easy to implement through the adjacency list of the BLMG method; and at all levels of refinement the updated triangles remain very well shaped, even if the mesh size at any particular refinement level varies by several orders of magnitude. Several numerical examples with singularities for elliptic problems, where explicit error estimators are used, verify the efficiency of the algorithm. The analysis of the parameters introduced in the size function shows that the algorithm has good flexibility.
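As a rough illustration of the interplay between the error estimator and the size function, the following minimal Python sketch updates a per-vertex mesh size from a local error indicator and clamps the change with two factor thresholds, as described above. The exponent, threshold values, and names (`eta_target`, `beta_min`, `beta_max`) are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

def update_size_function(h, eta, eta_target, beta_min=0.5, beta_max=2.0):
    """Update the vertex size function from a local a posteriori error estimator.

    h          : current mesh size at each vertex
    eta        : error estimator evaluated at each vertex
    eta_target : target error level (equidistribution of the error)
    beta_min, beta_max : factor thresholds capping how much a vertex may be
                         refined or coarsened in a single adaptive step
    """
    # Raw scaling factor: shrink h where the error is large, grow it where small.
    # The exponent depends on the convergence order of the element; 0.5 is illustrative.
    factor = (eta_target / np.maximum(eta, 1e-30)) ** 0.5

    # Clamp the factor so neither refinement nor coarsening is excessive
    # within one iteration (the role of the two thresholds described above).
    factor = np.clip(factor, beta_min, beta_max)
    return h * factor
```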

Relevance: 70.00%

Abstract:

Accurately and effectively simulating large deformation is one of the major challenges in the numerical modeling of metal forming. In this paper, an adaptive local meshless formulation based on meshless shape functions and the local weak form is developed for large deformation analysis. The Total Lagrangian (TL) and Updated Lagrangian (UL) approaches are used and thoroughly compared with each other in terms of computational efficiency and accuracy. It has been found that the developed meshless technique provides superior performance to the conventional FEM in dealing with large deformation problems in metal forming. In addition, the TL approach has better computational efficiency than the UL approach. However, the adaptive analysis is much more efficient with the UL approach than with the TL approach.
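The TL/UL distinction referred to above is standard: the Total Lagrangian approach measures all kinematics from the initial configuration, while the Updated Lagrangian approach moves the reference configuration forward after each load step (which is also where adaptive refinement naturally attaches). A schematic sketch, with a hypothetical `solve_step` standing in for the meshless local weak-form solve:

```python
import numpy as np

def incremental_analysis(X0, load_steps, solve_step, updated_lagrangian=True):
    """Skeleton of an incremental large-deformation solve.

    X0         : initial nodal coordinates (numpy array)
    load_steps : sequence of load increments
    solve_step : hypothetical solver returning the displacement increment for
                 one load step, given the current reference configuration
    """
    X_ref = X0.copy()            # reference configuration
    u_total = np.zeros_like(X0)
    for f in load_steps:
        du = solve_step(X_ref, f)
        u_total += du
        if updated_lagrangian:
            # UL: move the reference configuration to the deformed state.
            X_ref = X_ref + du
        # TL: X_ref stays at X0; all kinematics are measured from the initial state.
    return u_total
```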

Relevance: 40.00%

Abstract:

Large deformation analysis is one of the major challenges in the numerical modelling and simulation of metal forming. Because no mesh is used, meshfree methods show good potential for large deformation analysis. In this paper, a local meshfree formulation, based on local weak forms and the updated Lagrangian (UL) approach, is developed for large deformation analysis. To fully exploit the advantages of meshfree methods, a simple and effective adaptive technique is proposed; this procedure is much easier than re-meshing in FEM. Numerical examples of large deformation analysis are presented to demonstrate the effectiveness of the newly developed nonlinear meshfree approach. It has been found that the developed meshfree technique provides superior performance to the conventional FEM in dealing with large deformation problems in metal forming.

Relevance: 40.00%

Abstract:

The performance of an adaptive filter may be studied through the behaviour of the optimal and adaptive coefficients in a given environment. This thesis investigates the performance of finite impulse response adaptive lattice filters for two classes of input signals: (a) frequency modulated signals with polynomial phases of order p in complex Gaussian white noise (as nonstationary signals), and (b) impulsive autoregressive processes with alpha-stable distributions (as non-Gaussian signals). Initially, an overview is given of linear prediction and adaptive filtering. The convergence and tracking properties of the stochastic gradient algorithms are discussed for stationary and nonstationary input signals. It is explained that the stochastic gradient lattice algorithm has many advantages over the least-mean square algorithm, including a modular structure, easily guaranteed stability, lower sensitivity to the eigenvalue spread of the input autocorrelation matrix, and straightforward quantization of the filter coefficients (normally called reflection coefficients). We then characterize the performance of the stochastic gradient lattice algorithm for frequency modulated signals through the optimal and adaptive lattice reflection coefficients. This is a difficult task due to the nonlinear dependence of the adaptive reflection coefficients on the preceding stages and the input signal. To ease the derivations, we assume that the reflection coefficients of each stage are independent of the inputs to that stage. The optimal lattice filter is then derived for frequency modulated signals by computing the optimal values of the residual errors, reflection coefficients, and recovery errors. Next, we show the tracking behaviour of the adaptive reflection coefficients for frequency modulated signals by computing the average tracking model of these coefficients for the stochastic gradient lattice algorithm. The second-order convergence of the adaptive coefficients is investigated by modeling the theoretical asymptotic variance of the gradient noise at each stage. The accuracy of the analytical results is verified by computer simulations. Using these analytical results, we establish a new property, the polynomial-order-reducing property of adaptive lattice filters, which may be used to reduce the order of the polynomial phase of input frequency modulated signals. Two examples show how this property may be used in processing frequency modulated signals. In the first example, a detection procedure is carried out on a frequency modulated signal with a second-order polynomial phase in complex Gaussian white noise. We show that, using this technique, a better probability of detection is obtained for the reduced-order phase signals than with the traditional energy detector. It is also shown empirically that the distribution of the gradient noise in the first adaptive reflection coefficients approximates the Gaussian law. In the second example, the instantaneous frequency of the same observed signal is estimated. We show that this technique achieves a lower mean square error for the estimated frequencies at high signal-to-noise ratios than the adaptive line enhancer. The performance of adaptive lattice filters is then investigated for the second type of input signals, i.e., impulsive autoregressive processes with alpha-stable distributions.
The concept of alpha-stable distributions is first introduced. We show that the stochastic gradient algorithm, which yields desirable results for finite-variance input signals (such as frequency modulated signals in noise), does not converge quickly for infinite-variance stable processes (because it uses the minimum mean-square error criterion). To deal with such problems, the minimum dispersion criterion, fractional lower-order moments, and recently developed algorithms for stable processes are introduced. We then study the possibility of using the lattice structure for impulsive stable processes. Accordingly, two new algorithms, the least-mean P-norm lattice algorithm and its normalized version, are proposed for lattice filters based on fractional lower-order moments. Simulation results show that the proposed algorithms achieve faster convergence in parameter estimation for autoregressive stable processes with low to moderate degrees of impulsiveness, in comparison with many other algorithms. We also discuss how the impulsiveness of stable processes generates some misalignment between the estimated parameters and the true values. Due to the infinite variance of stable processes, the performance of the proposed algorithms is investigated using extensive computer simulations only.
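For context, the stochastic gradient (adaptive) lattice structure discussed throughout this thesis updates one reflection coefficient per stage from the local forward and backward prediction errors. The following is a textbook, power-normalized version of that update for real-valued signals; it is not the thesis's exact algorithm or its least-mean P-norm variant, and the step size and normalization are illustrative.

```python
import numpy as np

def gradient_adaptive_lattice(x, order, mu=0.05, eps=1e-8):
    """Textbook stochastic-gradient lattice predictor for real-valued signals.

    Returns the trajectory of the reflection coefficients k[m] over time.
    """
    N = len(x)
    k = np.zeros(order)              # reflection coefficients
    b_prev = np.zeros(order + 1)     # backward prediction errors at time n-1
    k_hist = np.zeros((N, order))
    for n in range(N):
        f = np.zeros(order + 1)
        b = np.zeros(order + 1)
        f[0] = b[0] = x[n]
        for m in range(1, order + 1):
            # Lattice recursions for the forward and backward errors of stage m.
            f[m] = f[m - 1] + k[m - 1] * b_prev[m - 1]
            b[m] = b_prev[m - 1] + k[m - 1] * f[m - 1]
            # Power-normalized stochastic gradient update of the stage-m
            # reflection coefficient (minimizing f[m]**2 + b[m]**2).
            power = f[m - 1] ** 2 + b_prev[m - 1] ** 2 + eps
            k[m - 1] -= mu * (f[m] * b_prev[m - 1] + b[m] * f[m - 1]) / power
        b_prev = b
        k_hist[n] = k
    return k_hist
```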

Relevance: 30.00%

Abstract:

The requirement for improved efficiency whilst maintaining system security necessitates the development of improved system analysis approaches and advanced emergency control technologies. Load shedding is a type of emergency control designed to ensure system stability by curtailing system load to match the available generation. This paper presents a new adaptive load shedding scheme that provides emergency protection against excessive frequency decline whilst minimizing the risk of line overloading. The proposed scheme uses local frequency-rate information to adapt the load shedding behaviour to the size and location of the disturbance. It is tested in simulation on a 3-region, 10-generator sample system and shows good performance.
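The idea of adapting load shedding to local frequency-rate information can be sketched as follows: the initial rate of change of frequency, together with the system inertia, gives an estimate of the size of the generation deficit via the swing equation, and the shed amount is then distributed across feeders. This is a simplified illustration, not the scheme proposed in the paper; the `sensitivities` weights are a hypothetical stand-in for how the paper accounts for disturbance location and line-overloading risk.

```python
def estimate_deficit_pu(rocof_hz_s, H_sys_s, f0_hz=50.0):
    """Estimate the generation deficit (per unit on the system base) from the
    locally measured rate of change of frequency, via the swing equation:
        df/dt = -f0 * deficit / (2 * H_sys)
    A falling frequency (negative df/dt) therefore implies a positive deficit.
    """
    return -2.0 * H_sys_s * rocof_hz_s / f0_hz

def shed_amounts(deficit_pu, feeder_loads_pu, sensitivities):
    """Distribute the required shed among feeders.

    sensitivities: hypothetical per-feeder weights reflecting how much a
    feeder's curtailment relieves loading near the disturbance; feeders whose
    curtailment would worsen line loading receive low weights.
    """
    total = sum(l * s for l, s in zip(feeder_loads_pu, sensitivities))
    if total <= 0 or deficit_pu <= 0:
        return [0.0] * len(feeder_loads_pu)
    scale = min(1.0, deficit_pu / total)
    return [scale * l * s for l, s in zip(feeder_loads_pu, sensitivities)]
```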

Relevance: 30.00%

Abstract:

Ecological dynamics characterizes adaptive behavior as an emergent, self-organizing property of interpersonal interactions in complex social systems. The authors conceptualize and investigate constraints on the dynamics of decisions and actions in the multiagent system of team sports. They studied coadaptive interpersonal dynamics in rugby union to model potential control parameter and collective variable relations in attacker–defender dyads. A videogrammetry analysis revealed how some agents generated fluctuations by adapting displacement velocity to create phase transitions and destabilize dyadic subsystems near the try line. Agent interpersonal dynamics exhibited characteristics of chaotic attractors, and the informational constraints of rugby union boxed dyadic systems into a low-dimensional attractor. The data suggest that decisions and actions of agents in sports teams may be characterized as emergent, self-organizing properties governed by laws of dynamical systems at the ecological scale. Further research needs to generalize this conceptual model of adaptive behavior in performance to other multiagent populations.

Relevance: 30.00%

Abstract:

The identification of attractors is one of the key tasks in studies of neurobiological coordination from a dynamical systems perspective, with a considerable body of literature resulting from this task. However, with regard to the typical movement models investigated, the overwhelming majority of actions studied previously belong to the class of continuous, rhythmical movements. In contrast, very few studies have investigated coordination of discrete movements, particularly multi-articular discrete movements. In the present study, we investigated phase transition behavior in a basketball throwing task where participants were instructed to shoot at the basket from different distances. Adopting the ubiquitous scaling paradigm, throwing distance was manipulated as a candidate control parameter. Using a cluster analysis approach, clear phase transitions between different movement patterns were observed in the performance of only two of eight participants. The remaining participants used a single movement pattern and varied it according to throwing distance, thereby exhibiting hysteresis effects. Results suggested that, in movement models involving many biomechanical degrees of freedom in degenerate systems, greater movement variation across individuals is available for exploitation. This observation stands in contrast to the movement variation typically observed in studies using more constrained bi-manual movement models. This degenerate system behavior provides new insights and poses fresh challenges to the dynamical systems theoretical approach, requiring further research beyond conventional movement models.
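A generic sketch of the cluster analysis approach mentioned above: kinematic features from each throwing trial are clustered, and a phase transition shows up as a switch in the dominant cluster label as the candidate control parameter (throwing distance) is scaled. The feature set, number of clusters, and data below are placeholder assumptions, not the study's actual protocol.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical data: one row per throwing trial, columns are kinematic
# features (e.g. peak joint angles, release velocity), with the scaled
# control parameter (throwing distance) recorded for each trial.
features = np.random.rand(80, 6)          # placeholder feature matrix
distance = np.linspace(2.0, 9.0, 80)      # throwing distance per trial (m)

X = StandardScaler().fit_transform(features)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# A phase transition would appear as a switch in the dominant cluster label
# as the control parameter (distance) is scaled up; hysteresis would appear
# as the switch point depending on the direction of scaling.
for d, lab in zip(distance, labels):
    print(f"distance {d:4.1f} m -> pattern {lab}")
```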

Relevance: 30.00%

Abstract:

Background: There are innumerable diabetes studies that have investigated associations between risk factors, protective factors, and health outcomes; however, these individual predictors are part of a complex network of interacting forces. Moreover, there is little awareness about resilience or its importance in chronic disease in adulthood, especially diabetes. Thus, this is the first study to: (1) extensively investigate the relationships among a host of predictors and multiple adaptive outcomes; and (2) conceptualise a resilience model among people with diabetes. Methods: This cross-sectional study was divided into two research studies. Study One was to translate two diabetes-specific instruments (Problem Areas In Diabetes, PAID; Diabetes Coping Measure, DCM) into a Chinese version and to examine their psychometric properties for use in Study Two in a convenience sample of 205 outpatients with type 2 diabetes. In Study Two, an integrated theoretical model is developed and evaluated using the structural equation modelling (SEM) technique. A self-administered questionnaire was completed by 345 people with type 2 diabetes from the endocrine outpatient departments of three hospitals in Taiwan. Results: Confirmatory factor analyses confirmed a one-factor structure of the PAID-C which was similar to the original version of the PAID. Strong content validity of the PAID-C was demonstrated. The PAID-C was associated with HbA1c and diabetes self-care behaviours, confirming satisfactory criterion validity. There was a moderate relationship between the PAID-C and the Perceived Stress Scale, supporting satisfactory convergent validity. The PAID-C also demonstrated satisfactory stability and high internal consistency. A four-factor structure and strong content validity of the DCM-C was confirmed. Criterion validity demonstrated that the DCM-C was significantly associated with HbA1c and diabetes self-care behaviours. There was a statistical correlation between the DCM-C and the Revised Ways of Coping Checklist, suggesting satisfactory convergent validity. Test-retest reliability demonstrated satisfactory stability of the DCM-C. The total scale of the DCM-C showed adequate internal consistency. Age, duration of diabetes, diabetes symptoms, diabetes distress, physical activity, coping strategies, and social support were the most consistent factors associated with adaptive outcomes in adults with diabetes. Resilience was positively associated with coping strategies, social support, health-related quality of life, and diabetes self-care behaviours. Results of the structural equation modelling revealed protective factors had a significant direct effect on adaptive outcomes; however, the construct of risk factors was not significantly related to adaptive outcomes. Moreover, resilience can moderate the relationships among protective factors and adaptive outcomes, but there were no interaction effects of risk factors and resilience on adaptive outcomes. Conclusion: This study contributes to an understanding of how risk factors and protective factors work together to influence adaptive outcomes in blood sugar control, health-related quality of life, and diabetes self-care behaviours. Additionally, resilience is a positive personality characteristic and may be importantly involved in the adjustment process among people living with type 2 diabetes.
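The moderation result described above (resilience changing the strength of the protective-factor pathway) is commonly tested with an interaction term. The sketch below uses an ordinary least squares regression with simulated placeholder data rather than the study's structural equation model; the variable names and effect sizes are illustrative only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 345  # sample size reported for Study Two

# Hypothetical composite scores; the real study used validated scales.
df = pd.DataFrame({
    "protective": rng.normal(size=n),
    "risk": rng.normal(size=n),
    "resilience": rng.normal(size=n),
})
df["adaptive_outcome"] = (
    0.5 * df["protective"]
    + 0.2 * df["protective"] * df["resilience"]
    + rng.normal(scale=0.5, size=n)
)

# Moderation appears as a significant interaction: the protective:resilience
# coefficient tests whether resilience changes the strength of the
# protective-factors -> adaptive-outcomes relationship.
model = smf.ols(
    "adaptive_outcome ~ protective + risk + resilience"
    " + protective:resilience + risk:resilience",
    data=df,
).fit()
print(model.summary())
```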

Relevance: 30.00%

Abstract:

The aim of this paper is to advance understanding of the processes of cluster-building and evolution, or transformative and adaptive change, through the conscious design and reflective activities of private and public actors. A model of transformation is developed which illustrates the importance of actors being exposed to new ideas and visions for industrial change by political entrepreneurs and external networks. Further, actors must be guided in their decision-making and action by the new vision, and this requires that they are persuaded of its viability through the provision of test cases and supportive resources and institutions. In order for new ideas to become guiding models, actors must be convinced of their desirability through the portrayal of models as a means of confronting competitive challenges and serving the economic interests of the city/region. Subsequent adaptive change is iterative and reflexive, involving a process of strategic learning amongst key industrial and political actors.

Relevance: 30.00%

Abstract:

The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation of these images. A specific fixed decomposition structure is designed to be used by the wavelet packet in order to save on computation, transmission, and storage costs. This decomposition structure is based on analysis of the information packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean square criterion as well as the sensitivities of human vision. To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model shape parameter is formulated to estimate the model parameters. A noise shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters, known as the truncation level and scaling factor. In lattice-based compression algorithms reported in the literature, the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach. In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made, and no training or multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage. Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of the wavelet coefficients, a piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all source vectors without the need to project them onto the outermost shell of the lattice, while properly maintaining a small codebook size. It also resolves the wedge region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training or multiquantizing (to determine the lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, a positive-negative mean algorithm is proposed to improve the resolution of reconstructed images.
For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of the reconstructed images. A method to reduce the bit requirements of the necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high-quality reconstructed images with better compression ratios than other available algorithms. To evaluate the proposed algorithms, objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structure-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local dominant ridge directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
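As background to the generalized Gaussian modelling of subband coefficients mentioned above, a common way to fit the shape and scale parameters is moment matching on the ratio of the first absolute moment to the square root of the second moment. This is a standard estimator, not the least-squares formulation developed in the thesis; a minimal sketch, assuming the empirical ratio falls inside the bracketing interval:

```python
import numpy as np
from scipy.special import gamma
from scipy.optimize import brentq

def fit_ggd(coeffs):
    """Fit a zero-mean generalized Gaussian p(x) ~ exp(-(|x|/alpha)**beta)
    to wavelet subband coefficients by moment matching.
    """
    x = np.asarray(coeffs, dtype=float)
    m1 = np.mean(np.abs(x))   # first absolute moment
    m2 = np.mean(x ** 2)      # second moment
    ratio = m1 / np.sqrt(m2)

    # r(beta) = Gamma(2/b) / sqrt(Gamma(1/b) * Gamma(3/b)) is monotone in beta,
    # so the shape parameter solves r(beta) = ratio.
    r = lambda b: gamma(2.0 / b) / np.sqrt(gamma(1.0 / b) * gamma(3.0 / b))
    beta = brentq(lambda b: r(b) - ratio, 0.1, 10.0)
    alpha = np.sqrt(m2 * gamma(1.0 / beta) / gamma(3.0 / beta))
    return alpha, beta
```

For a Gaussian subband the recovered shape parameter is close to 2, and for a Laplacian-like subband close to 1; heavier-tailed subbands give smaller values.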

Relevance: 30.00%

Abstract:

This thesis deals with the problem of instantaneous frequency (IF) estimation of sinusoidal signals. This topic plays a significant role in signal processing and communications. Depending on the type of signal, two major approaches are considered. For IF estimation of single-tone or digitally modulated sinusoidal signals (such as frequency shift keying signals), the approach of digital phase-locked loops (DPLLs) is considered; this is Part-I of the thesis. For FM signals, the approach of time-frequency analysis is considered; this is Part-II of the thesis. In Part-I we utilize sinusoidal DPLLs with a non-uniform sampling scheme, as this type is widely used in communication systems. The digital tanlock loop (DTL) introduced significant advantages over other existing DPLLs, and in the last ten years many efforts have been made to improve its performance. However, this loop and all of its modifications utilize a Hilbert transformer (HT) to produce a signal-independent 90-degree phase-shifted version of the input signal. A Hilbert transformer can be realized approximately using a finite impulse response (FIR) digital filter. This realization introduces further complexity into the loop, in addition to approximations and frequency limitations on the input signal. We have tried to avoid the practical difficulties associated with the conventional tanlock scheme while keeping its advantages. A time delay is utilized in the tanlock scheme of the DTL to produce a signal-dependent phase shift. This gives rise to the time-delay digital tanlock loop (TDTL). Fixed point theorems are used to analyze the behavior of the new loop. As such, TDTL combines the two major approaches in DPLLs: the non-linear approach of the sinusoidal DPLL based on fixed point analysis, and the linear tanlock approach based on arctan phase detection. TDTL preserves the main advantages of the DTL despite its reduced structure. An application of TDTL to FSK demodulation is also considered. The idea of replacing the HT by a time delay may be of interest in other signal processing systems. Hence we analyze and compare the behavior of the HT and the time delay in the presence of additive Gaussian noise and, based on this analysis, the behavior of first- and second-order TDTLs in additive Gaussian noise. Since DPLLs need time for locking, they are normally not efficient at tracking the continuously changing frequencies of non-stationary signals, i.e. signals with time-varying spectra. Non-stationary signals are important in synthetic and real-life applications; an example is the frequency-modulated (FM) signals widely used in communication systems. Part-II of this thesis is dedicated to the IF estimation of non-stationary signals. For such signals the classical spectral techniques break down, due to the time-varying nature of their spectra, and more advanced techniques should be utilized. For the instantaneous frequency estimation of non-stationary signals there are two major approaches: parametric and non-parametric. We chose the non-parametric approach, which is based on time-frequency analysis. This approach is computationally less expensive and more effective in dealing with multicomponent signals, which are the main focus of this part of the thesis. A time-frequency distribution (TFD) of a signal is a two-dimensional transformation of the signal to the time-frequency domain. Multicomponent signals can be identified by multiple energy peaks in the time-frequency domain.
Many real-life and synthetic signals are of a multicomponent nature, and there is little in the literature concerning IF estimation of such signals; this is why we concentrate on multicomponent signals in Part-II. An adaptive algorithm for IF estimation using quadratic time-frequency distributions is analyzed, and a class of time-frequency distributions that is more suitable for this purpose is proposed. The kernels of this class are time-only, or one-dimensional, rather than the two-dimensional time-lag kernels; hence this class has been named the T-class. If the parameters of these TFDs are properly chosen, they are more efficient than the existing fixed-kernel TFDs in terms of resolution (energy concentration around the IF) and artifact reduction. The T-distributions have been used in the adaptive IF algorithm and proved efficient in tracking rapidly changing frequencies. They also enable direct amplitude estimation for the components of a multicomponent signal.
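A minimal non-parametric IF estimation sketch in the spirit of Part-II: a time-frequency distribution is computed and the IF is read off as the frequency of the energy peak at each time instant. A spectrogram is used here as a simple stand-in for the quadratic T-class distributions proposed in the thesis; the synthetic signal and window parameters are illustrative.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
# Synthetic FM signal with a quadratic (second-order polynomial) phase.
x = np.cos(2 * np.pi * (100 * t + 150 * t ** 2))

# Spectrogram as a stand-in TFD; the IF estimate is the frequency of the
# energy peak at each time instant (valid for a monocomponent signal).
f, tt, S = spectrogram(x, fs=fs, nperseg=128, noverlap=120)
if_est = f[np.argmax(S, axis=0)]

# True IF of the synthetic signal: derivative of the phase / (2*pi).
if_true = 100 + 300 * tt
print("max IF error (Hz):", np.max(np.abs(if_est - if_true)))
```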

Relevance: 30.00%

Abstract:

Attending potentially dangerous and traumatic incidents is inherent in the role of emergency workers, yet there is a paucity of literature aimed at examining variables that impact on the outcomes of such exposure. Coping has been implicated in adjusting to trauma in other contexts, and this study explored the effectiveness of coping strategies in relation to positive and negative posttrauma outcomes in the emergency services environment. One hundred twenty-five paramedics completed a survey battery including the Posttraumatic Growth Inventory (PTGI; Tedeschi & Calhoun, 1996), the Impact of Events Scale–Revised (IES-R; Weiss & Marmar, 1997), and the Revised-COPE (Zuckerman & Gagne, 2003). Results from the regression analysis demonstrated that specific coping strategies were differentially associated with positive and negative posttrauma outcomes. The research contributes to a more comprehensive understanding regarding the effectiveness of coping strategies employed by paramedics in managing trauma, with implications for their psychological well-being as well as the training and support services available.

Relevance: 30.00%

Abstract:

This paper presents an explanation of why the reuse of building components after demolition or deconstruction is critical to the future of the construction industry. An examination of the historical causes of, and responses to, climate change sets the scene as to why governance is becoming increasingly focused on the built environment as a mechanism for controlling the waste generation associated with demolition, construction and operation. Through an annotated description of the evolving design and construction methodology of a range of timber dwellings (typically 'Queenslanders' of the 1880-1900, 1900-1920 and 1920-1940 eras), the paper offers an evaluation of the variety of materials that can be used advantageously by those wishing to 'regenerate' a Queenslander. This analysis of 'regeneration' details the constraints on relocation and/or reuse by adaptation, including deconstruction of building components, against the legislative framework requirements of the Queensland Building Act 1975 and the Queensland Sustainable Planning Act 2009, with specific examination of the requirements of the Building Codes of Australia. The paper concludes with a discussion of these constraints, their impacts on 'regeneration', and the need for further research to better understand the practicalities and drivers of relocation, adaptive reuse, and the suitability of building components for reuse after deconstruction.