891 results for Nonlinear dynamic analysis
Abstract:
The performance of an adaptive filter may be studied through the behaviour of the optimal and adaptive coefficients in a given environment. This thesis investigates the performance of finite impulse response adaptive lattice filters for two classes of input signals: (a) frequency modulated signals with polynomial phases of order p in complex Gaussian white noise (as nonstationary signals), and (b) impulsive autoregressive processes with alpha-stable distributions (as non-Gaussian signals). Initially, an overview is given of linear prediction and adaptive filtering. The convergence and tracking properties of stochastic gradient algorithms are discussed for stationary and nonstationary input signals. It is explained that the stochastic gradient lattice algorithm has many advantages over the least mean square algorithm: a modular structure, easily guaranteed stability, lower sensitivity to the eigenvalue spread of the input autocorrelation matrix, and easy quantization of the filter coefficients (normally called reflection coefficients). We then characterize the performance of the stochastic gradient lattice algorithm for frequency modulated signals through the optimal and adaptive lattice reflection coefficients. This is a difficult task due to the nonlinear dependence of the adaptive reflection coefficients on the preceding stages and the input signal. To ease the derivations, we assume that the reflection coefficients of each stage are independent of the inputs to that stage. The optimal lattice filter is then derived for frequency modulated signals by computing the optimal values of the residual errors, reflection coefficients, and recovery errors. Next, we show the tracking behaviour of the adaptive reflection coefficients for frequency modulated signals by computing the average tracking model of these coefficients for the stochastic gradient lattice algorithm.
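The per-stage recursions and gradient update underlying a stochastic gradient lattice filter of the kind analysed above can be sketched as follows. This is a minimal illustrative form, not the thesis's exact algorithm; step size, model order and the AR(1) test signal are assumptions for the demonstration.

```python
import numpy as np

def sg_lattice(x, order=2, mu=0.01):
    """One-pass stochastic-gradient lattice filter (minimal sketch).

    Each stage updates its reflection coefficient k[m] by descending the
    instantaneous sum of forward and backward prediction-error powers.
    """
    k = np.zeros(order)              # reflection coefficients, one per stage
    b_prev = np.zeros(order + 1)     # backward errors from the previous sample
    history = []
    for sample in x:
        f = sample                   # stage-0 forward error
        b_new = np.empty(order + 1)
        b_new[0] = sample            # stage-0 backward error
        for m in range(order):
            f_next = f + k[m] * b_prev[m]
            b_new[m + 1] = b_prev[m] + k[m] * f
            # instantaneous gradient of f_m^2 + b_m^2 w.r.t. k[m]
            k[m] -= mu * (f_next * b_prev[m] + b_new[m + 1] * f)
            f = f_next
        b_prev = b_new
        history.append(k.copy())
    return np.array(history)

# For an AR(1) process x(n) = 0.8 x(n-1) + w(n), the optimal first
# reflection coefficient is -r(1)/r(0) = -0.8, so the adaptive
# coefficient should settle near that value.
rng = np.random.default_rng(0)
w = rng.standard_normal(20000)
x = np.zeros_like(w)
for n in range(1, len(w)):
    x[n] = 0.8 * x[n - 1] + w[n]
ks = sg_lattice(x, order=1, mu=0.005)
```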
The second-order convergence of the adaptive coefficients is investigated by modeling the theoretical asymptotic variance of the gradient noise at each stage. The accuracy of the analytical results is verified by computer simulations. Using these analytical results, we establish a new property of adaptive lattice filters: the polynomial order reducing property, which may be used to reduce the order of the polynomial phase of input frequency modulated signals. Two examples show how this property may be used in processing frequency modulated signals. In the first example, a detection procedure is carried out on a frequency modulated signal with a second-order polynomial phase in complex Gaussian white noise. We show that, using this technique, a better probability of detection is obtained for the reduced-order phase signals than with the traditional energy detector. It is also shown empirically that the distribution of the gradient noise in the first adaptive reflection coefficients approximates the Gaussian law. In the second example, the instantaneous frequency of the same observed signal is estimated. We show that this technique achieves a lower mean square error for the estimated frequencies at high signal-to-noise ratios than the adaptive line enhancer. The performance of adaptive lattice filters is then investigated for the second class of input signals, i.e., impulsive autoregressive processes with alpha-stable distributions. The concept of alpha-stable distributions is first introduced. We discuss how the stochastic gradient algorithm, which performs well for finite-variance input signals (such as frequency modulated signals in noise), converges slowly for infinite-variance stable processes because it relies on the minimum mean-square error criterion.
To deal with such problems, the minimum dispersion criterion, fractional lower order moments, and recently developed algorithms for stable processes are introduced. We then study the possibility of using the lattice structure for impulsive stable processes. Accordingly, two new algorithms, the least mean p-norm lattice algorithm and its normalized version, are proposed for lattice filters based on fractional lower order moments. Simulation results show that the proposed algorithms achieve faster convergence in parameter estimation of autoregressive stable processes with low to moderate degrees of impulsiveness than many other algorithms. We also discuss how the impulsiveness of stable processes generates some misalignment between the estimated parameters and the true values. Due to the infinite variance of stable processes, the performance of the proposed algorithms is investigated using extensive computer simulations only.
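The fractional-lower-order idea behind the least mean p-norm lattice can be sketched for a single stage as below. This is an illustrative simplification of the normalized variant, not the thesis's exact algorithm: the cost, the instantaneous normalization, the step size and the Cauchy-driven AR(1) test signal are all assumptions.

```python
import numpy as np

def normalized_lmp_lattice_stage(x, p=0.9, mu=0.01, eps=1e-8):
    """Single-stage sketch of a normalized least mean p-norm lattice:
    the same recursions as the stochastic gradient lattice, but descending
    the fractional lower order moment E[|f|^p + |b|^p] (p < alpha) with an
    instantaneous power normalization so updates stay bounded under
    impulsive samples."""
    # derivative of |e|^p w.r.t. e, up to the constant p; eps avoids 0**neg
    g = lambda e: np.sign(e) * (np.abs(e) + eps) ** (p - 1)
    k, b_prev, ks = 0.0, 0.0, []
    for f0 in x:
        f1 = f0 + k * b_prev
        b1 = b_prev + k * f0
        norm = np.abs(f0) ** p + np.abs(b_prev) ** p + eps
        k -= mu * (g(f1) * b_prev + g(b1) * f0) / norm
        b_prev = f0
        ks.append(k)
    return np.array(ks)

# AR(1) process driven by standard Cauchy (alpha = 1) innovations: the
# mean-square criterion is useless here, but the p-norm update (p < 1)
# still settles near the true reflection coefficient of about -0.8, up
# to the misalignment the thesis notes for impulsive inputs.
rng = np.random.default_rng(3)
w = rng.standard_cauchy(50000)
x = np.zeros_like(w)
for n in range(1, len(w)):
    x[n] = 0.8 * x[n - 1] + w[n]
ks = normalized_lmp_lattice_stage(x, p=0.9, mu=0.01)
```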
Abstract:
The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation of these images. A specific fixed decomposition structure is designed for the wavelet packet transform in order to save on computation, transmission, and storage costs. This decomposition structure is based on analysis of the information packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean square criterion as well as the sensitivities of human vision. To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed, based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model shape parameter is formulated to estimate the model parameters. A noise shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters, known as the truncation level and scaling factor. In lattice-based compression algorithms reported in the literature, the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach.
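Fitting a generalized Gaussian model to subband coefficients, as described above, hinges on estimating the shape parameter. The thesis formulates this as a nonlinear least squares problem; the sketch below uses a simpler, commonly cited moment-matching alternative (inverting the generalized Gaussian moment ratio by bisection), so it illustrates the modelling step rather than the thesis's exact estimator.

```python
import numpy as np
from math import gamma

def ggd_shape(samples):
    """Estimate the generalized-Gaussian shape parameter by moment matching.

    For a zero-mean GGD with shape beta, the ratio
        r(beta) = Gamma(2/beta)^2 / (Gamma(1/beta) * Gamma(3/beta))
    equals (E|x|)^2 / E[x^2] and is increasing in beta, so the sample
    ratio can be inverted by bisection.
    """
    x = np.asarray(samples, float)
    target = np.mean(np.abs(x)) ** 2 / np.mean(x ** 2)

    def r(beta):
        return gamma(2 / beta) ** 2 / (gamma(1 / beta) * gamma(3 / beta))

    lo, hi = 0.1, 10.0
    for _ in range(60):              # r(beta) is monotone increasing
        mid = 0.5 * (lo + hi)
        if r(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Sanity checks: Gaussian data should give a shape near 2 (r = 2/pi),
# Laplacian data a shape near 1 (r = 1/2).
rng = np.random.default_rng(1)
beta_gauss = ggd_shape(rng.standard_normal(200000))
beta_lapl = ggd_shape(rng.laplace(size=200000))
```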
In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made, and no training or multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage. Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all source vectors without the need to project them onto the lattice outermost shell, while properly maintaining a small codebook size. It also resolves the wedge region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training or multi-quantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images. For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce the bit requirements of the necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high quality reconstructed images with better compression ratios than other available algorithms.
To evaluate the proposed algorithms, objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structural-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
Abstract:
This paper presents a stability analysis, based on bifurcation theory, for a distribution static compensator (DSTATCOM) operating in current control mode. Bifurcations delimit the operating zones of nonlinear circuits, so the capability to compute them is of particular interest for practical design. A control design for the DSTATCOM is proposed. Along with this control, a suitable mathematical representation of the DSTATCOM is proposed to carry out the bifurcation analysis efficiently. The stability regions in the Thevenin equivalent plane are computed for different power factors at the point of common coupling. In addition, the stability regions in the control gain space, as well as the contour lines for different Floquet multipliers, are computed. It is demonstrated through bifurcation analysis that the loss of stability in the DSTATCOM is due to the emergence of a Neimark bifurcation. The observations are verified through simulation studies.
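The Floquet machinery behind such an analysis can be sketched generically: integrate the fundamental matrix of the linearized periodic system over one period to obtain the monodromy matrix, whose eigenvalues are the Floquet multipliers; a Neimark (Neimark-Sacker) bifurcation occurs when a complex pair crosses the unit circle. The sketch below is a generic numerical recipe with an assumed test system, not the paper's DSTATCOM model.

```python
import numpy as np

def floquet_multipliers(A_of_t, T, dim, steps=2000):
    """Floquet multipliers of x' = A(t) x with A(t + T) = A(t):
    integrate the fundamental matrix over one period with classical RK4
    and take the eigenvalues of the resulting monodromy matrix."""
    Phi = np.eye(dim)
    h = T / steps
    t = 0.0
    for _ in range(steps):
        k1 = A_of_t(t) @ Phi
        k2 = A_of_t(t + h / 2) @ (Phi + h / 2 * k1)
        k3 = A_of_t(t + h / 2) @ (Phi + h / 2 * k2)
        k4 = A_of_t(t + h) @ (Phi + h * k3)
        Phi = Phi + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return np.linalg.eigvals(Phi)

# Sanity check on a constant-coefficient case: a damped oscillator
# x'' + 2*zeta*w*x' + w^2*x = 0 has multipliers exp(lambda_i * T), so
# their magnitude is exp(-zeta*w*T); inside the unit circle -> stable.
w, zeta, T = 2 * np.pi, 0.1, 1.0
A = np.array([[0.0, 1.0], [-w**2, -2 * zeta * w]])
mults = floquet_multipliers(lambda t: A, T, 2)
stable = np.all(np.abs(mults) < 1)
```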
Abstract:
With the increase in the level of global warming, renewable energy based distributed generators (DGs) will increasingly play a dominant role in electricity production. Distributed generation based on solar energy (photovoltaic and solar thermal), wind, biomass and mini-hydro, along with the use of fuel cells and microturbines, will gain considerable momentum in the near future. A microgrid consists of clusters of loads and distributed generators that operate as a single controllable system. The interconnection of DGs to the utility grid through power electronic converters has raised concerns about the safe operation and protection of the equipment. Many innovative control techniques have been used to enhance the stability of the microgrid as well as to achieve proper load sharing. The most common method is the use of droop characteristics for decentralized load sharing. Parallel converters have been controlled to deliver the desired real power (and reactive power) to the system. Local signals are used as feedback to control the converters, since in a real system the distance between converters may make inter-communication impractical. Real and reactive power sharing can be achieved by controlling two independent quantities: the frequency and the fundamental voltage magnitude. In this thesis, an angle droop controller is proposed to share power amongst converter interfaced DGs in a microgrid. As the angle of the output voltage can be changed instantaneously in a voltage source converter (VSC), controlling the angle to control the real power is beneficial for quick attainment of steady state. Thus, in converter based DGs, load sharing can be performed by drooping the converter output voltage magnitude and its angle instead of the frequency. The angle control results in much smaller frequency variation than frequency droop. An enhanced frequency droop controller is also proposed for better dynamic response and smooth transition between the grid connected and islanded modes of operation.
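The steady-state effect of angle droop can be sketched with a lossless two-converter toy model (all names, network values and the lossless assumption are illustrative, not the thesis's system). Each VSC sets its angle as delta_i = delta0 - m_i * P_i and feeds a common bus through a reactance, so eliminating delta_i gives P_i in closed form.

```python
import numpy as np

def angle_droop_sharing(m, X, V, delta0, P_load):
    """Steady-state real-power sharing of parallel VSCs under angle droop
    (illustrative lossless small-angle model).

    With delta_i = delta0 - m[i] * P_i and P_i = V^2 (delta_i - delta_bus) / X[i],
    eliminating delta_i gives P_i = V^2 (delta0 - delta_bus) / (X[i] + V^2 m[i]);
    the bus angle is then fixed by sum(P_i) = P_load.
    """
    m, X = np.asarray(m, float), np.asarray(X, float)
    denom = X + V**2 * m
    delta_bus = delta0 - P_load / (V**2 * np.sum(1.0 / denom))
    return V**2 * (delta0 - delta_bus) / denom

# Droop gains chosen for a nominal 2:1 sharing ratio.  With large gains
# the line reactances stop mattering and P1/P2 -> m2/m1, which is why
# accurate sharing pushes the gains high (the stability tradeoff the
# thesis addresses with a supplementary controller).
P = angle_droop_sharing(m=[0.5, 1.0], X=[0.1, 0.15], V=1.0,
                        delta0=0.3, P_load=1.2)
```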
A modular controller structure with a modified control loop is proposed for better load sharing between the parallel connected converters in a distributed generation system. Moreover, a method for smooth transition between grid connected and islanded modes is proposed. Power quality enhanced operation of a microgrid in the presence of unbalanced and non-linear loads is also addressed, in which the DGs act as compensators. The compensator can perform load balancing, harmonic compensation and reactive power control while supplying real power to the grid. A frequency and voltage isolation technique between the microgrid and the utility is proposed using a back-to-back converter. As the utility and the microgrid are totally isolated, voltage or frequency fluctuations on the utility side do not affect the microgrid loads, and vice versa. Another advantage of this scheme is that bidirectional regulated power flow can be achieved through the back-to-back converter structure. For accurate load sharing, the droop gains have to be high, which has the potential of making the system unstable. Therefore the choice of droop gains is often a tradeoff between power sharing and stability. To improve this situation, a supplementary droop controller is proposed. A small signal model of the system is developed, based on which the parameters of the supplementary controller are designed. Two methods are proposed for load sharing in an autonomous microgrid in a rural network with high R/X ratio lines. The first method achieves power sharing without any communication between the DGs: the feedback quantities and the gain matrices are transformed with a transformation matrix based on the line R/X ratio. The second method involves minimal communication among the DGs: the converter output voltage angle reference is modified based on the active and reactive power flow in the line connected at the point of common coupling (PCC).
It is shown that a more economical and proper power sharing solution is possible with web-based communication of the power flow quantities. All the proposed methods are verified through PSCAD simulations. The converters are modeled with IGBT switches and anti-parallel diodes with associated snubber circuits. All the rotating machines are modeled in detail, including their dynamics.
Abstract:
One of the main causes of above-knee or transfemoral amputation (TFA) in the developed world is trauma to the limb. The number of people undergoing TFA due to limb trauma, particularly war injuries, has been increasing. Typically the trauma amputee population, including war-related amputees, is otherwise healthy and active, and desires to return to employment and their usual lifestyle. Consequently there is a growing need to restore long-term mobility and limb function to this population. Traditionally transfemoral amputees are provided with an artificial or prosthetic leg that consists of a fabricated socket, a knee joint mechanism and a prosthetic foot. Amputees have reported several problems related to the socket of their prosthetic limb, including pain in the residual limb, poor socket fit, discomfort and poor mobility. Removing the socket from the prosthetic limb could eliminate or reduce these problems. One solution is the direct attachment of the prosthesis to the residual bone (femur) inside the residual limb. This technique has been used on a small population of transfemoral amputees since 1990. A threaded titanium implant is screwed into the shaft of the femur, and a second component connects the implant to the prosthesis. A period of time is required for the implant to become fully attached to the bone, a process called osseointegration (OI), and able to withstand applied load; the prosthesis can then be attached. The advantages of transfemoral osseointegration (TFOI) over conventional prosthetic sockets include better hip mobility, sitting comfort and prosthetic retention, and fewer skin problems on the residual limb. However, due to the length of time required for OI to progress and for the rehabilitation exercises to be completed, it can take up to twelve months after implant insertion for an amputee to be able to bear load and walk unaided.
The long rehabilitation time is a significant disadvantage of TFOI and may be impeding the wider adoption of the technique. There is a need for a non-invasive method of assessing the degree of osseointegration between the bone and the implant. If such a method were capable of determining the progression of TFOI and assessing when the implant was able to withstand physiological load, it could reduce the overall rehabilitation time. Vibration analysis has been suggested as a potential technique: it is a non-destructive method of assessing the dynamic properties of a structure. Changes in the physical properties of a structure can be identified from changes in its dynamic properties. Consequently vibration analysis, both experimental and computational, has been used to assess bone fracture healing, prosthetic hip loosening and dental implant OI, with varying degrees of success. More recently, experimental vibration analysis has been used in TFOI. However, further work is needed to assess the potential of the technique and to fully characterise the femur-implant system. The overall aim of this study was to develop physical and computational models of the TFOI femur-implant system and use these models to investigate the feasibility of vibration analysis for detecting the process of OI. Femur-implant physical models were developed and manufactured using synthetic materials to represent four key stages of OI development (identified from a physiological model), simulated using different interface conditions between the implant and femur. Experimental vibration analysis (modal analysis) was then conducted using the physical models. The femur-implant models, representing stages one to four of OI development, were excited and the modal parameters obtained over the range 0–5 kHz. The results indicated the technique had limited capability in distinguishing between different interface conditions. The fundamental bending mode did not alter with interfacial changes.
However, higher modes were able to track chronological changes in interface condition through changes in natural frequency, although no single modal parameter could uniquely distinguish between each interface condition. The importance of the model boundary condition (how the model is constrained) was the key finding: variations in the boundary condition altered the modal parameters obtained. Therefore the boundary conditions need to be held constant between tests in order for the detected modal parameter changes to be attributed to interface condition changes. A three-dimensional finite element (FE) model of the femur-implant model was then developed and used to explore the sensitivity of the modal parameters to more subtle interfacial and boundary condition changes. The FE model was created using the synthetic femur geometry and an approximation of the implant geometry. The natural frequencies of the FE model matched the experimental frequencies within 20%, and the FE and experimental mode shapes were similar. The FE model was therefore shown to successfully capture the dynamic response of the physical system. As with the experimental modal analysis, the fundamental bending mode of the FE model did not alter with changes in interface elastic modulus. Axial and torsional modes were identified by the FE model that were not detected experimentally; the torsional mode exhibited the largest frequency change due to interfacial changes (103% between the lower and upper limits of the interface modulus range). The FE model therefore provided additional information on the dynamic response of the system and was complementary to the experimental model. The small changes in natural frequency over a large range of interface region elastic moduli indicated the method may only be able to distinguish between early and late OI progression.
The boundary conditions applied to the FE model influenced the modal parameters to a far greater extent than the interface condition variations. Therefore the FE model, as well as the experimental modal analysis, indicated that the boundary conditions need to be held constant between tests in order for the detected changes in modal parameters to be attributed to interface condition changes alone. The results of this study suggest that, in a clinical setting, it is unlikely that the in vivo boundary conditions of the amputated femur could be adequately controlled or replicated over time; consequently, it is unlikely that any longitudinal change in frequency detected by the modal analysis technique could be attributed exclusively to changes at the femur-implant interface. Further development of the modal analysis technique would therefore require significant consideration of the clinical boundary conditions and investigation of modes other than the bending modes.
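The core idea of tracking OI through natural frequencies can be illustrated with a toy two-degree-of-freedom lump model (every mass and stiffness value below is an invented illustration, not the thesis's FE model): stiffening the femur-implant interface shifts the mode frequencies, and some modes are far more sensitive than others.

```python
import numpy as np

def natural_frequencies(k_interface, k_boundary=1e6, m_femur=0.4, m_implant=0.2):
    """Natural frequencies (Hz) of a toy 2-DOF femur-implant lump model.

    The femur mass is tied to ground through a boundary stiffness and to
    the implant mass through an interface stiffness whose growth mimics
    progressing osseointegration.  Frequencies come from the generalized
    eigenvalue problem K x = w^2 M x.
    """
    K = np.array([[k_boundary + k_interface, -k_interface],
                  [-k_interface, k_interface]])
    M = np.diag([m_femur, m_implant])
    evals = np.linalg.eigvals(np.linalg.inv(M) @ K)
    return np.sort(np.sqrt(np.abs(evals))) / (2 * np.pi)

# Stiffening interface (early -> late osseointegration): both modes rise,
# but the second mode shifts proportionally far more, echoing the finding
# that higher modes track interface changes better than the fundamental.
f_early = natural_frequencies(k_interface=1e5)
f_late = natural_frequencies(k_interface=1e7)
```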
Abstract:
The purpose of this study is to contribute to the cross-disciplinary body of literature on identity and organisational culture. This study empirically investigated the Hatch and Schultz (2002) Organisational Identity Dynamics (OID) model to examine linkages between identity, image, and organisational culture. The processes defined in the OID model were used as a theoretical frame by which to understand the relationships between actual and espoused identity manifestations across visual identity (VI), corporate identity (CI), and organisational identity (OI). The linking processes of impressing, mirroring, reflecting, and expressing were discussed at three unique levels in the organisation. The overarching research question, "How does the organisational identity dynamics process manifest itself in practice at different levels within an organisation?", was used as a means of providing empirical understanding of the previously theoretical OID model. Case study analysis was utilised to provide exploratory data across the organisational groups of Level A (Senior Marketing and Corporate Communications Management), Level B (Marketing and Corporate Communications Staff), and Level C (Non-Marketing Managers and Employees). Data were collected via 15 in-depth interviews, with documentary analysis used as a supporting mechanism to provide triangulation in analysis. Data were analysed against the impressing, mirroring, reflecting, and expressing constructs, with specific criteria developed from the literature to provide a detailed analysis of each process. Conclusions revealed marked differences in the ways in which OID processes occurred across different levels, with implications for the ways in which VI, CI, and OI interact to develop holistic identity across organisational levels.
Implications for theory detail the need to incorporate cultural understanding into identity programs, as well as the value of developing identity communications that represent an actual rather than an espoused position.
Abstract:
The theory of nonlinear dynamic systems provides new methods for handling complex systems. Chaos theory offers new concepts, algorithms and methods for processing, enhancing and analyzing measured signals. In recent years, researchers have been applying the concepts of this theory to bio-signal analysis. In this work, the complex dynamics of bio-signals such as the electrocardiogram (ECG) and electroencephalogram (EEG) are analyzed using the tools of nonlinear systems theory. In the modern industrialized countries, several hundred thousand people die of sudden cardiac death every year. The electrocardiogram (ECG) is an important biosignal representing the sum total of millions of cardiac cell depolarization potentials. It contains important insight into the state of health and the nature of the disease afflicting the heart. Heart rate variability (HRV) refers to the regulation of the sinoatrial node, the natural pacemaker of the heart, by the sympathetic and parasympathetic branches of the autonomic nervous system. Heart rate variability analysis is an important tool for observing the heart's ability to respond to normal regulatory impulses that affect its rhythm. A computer-based intelligent system for the analysis of cardiac states is very useful in diagnostics and disease management. Like many bio-signals, HRV signals are non-linear in nature. Higher order spectral analysis (HOS) is known to be a good tool for the analysis of non-linear systems and provides good noise immunity. In this work, we studied the HOS of the HRV signals of normal heartbeat and four classes of arrhythmia. This thesis presents some general characteristics for each of these classes of HRV signals in the bispectrum and bicoherence plots. Several features were extracted from the HOS and subjected to an analysis of variance (ANOVA) test. The results are very promising for cardiac arrhythmia classification, with a number of features yielding a p-value < 0.02 in the ANOVA test.
An automated intelligent system for the identification of cardiac health is very useful in healthcare technology. In this work, seven features were extracted from the heart rate signals using HOS and fed to a support vector machine (SVM) for classification. The performance evaluation protocol in this thesis uses 330 subjects consisting of five different kinds of cardiac disease conditions. The classifier achieved a sensitivity of 90% and a specificity of 89%. This system is ready to run on larger data sets. In EEG analysis, the search for hidden information for the identification of seizures has a long history. Epilepsy is a pathological condition characterized by spontaneous and unforeseeable occurrence of seizures, during which the perception or behavior of the patient is disturbed. Automatic early detection of seizure onset would help patients and observers take appropriate precautions. Various methods have been proposed to predict the onset of seizures based on EEG recordings. The use of nonlinear features motivated by higher order spectra (HOS) has been reported to be a promising approach for differentiating between normal, background (pre-ictal) and epileptic EEG signals. In this work, these features are used to train both a Gaussian mixture model (GMM) classifier and a support vector machine (SVM) classifier. Results show that the classifiers achieved 93.11% and 92.67% classification accuracy, respectively, with selected HOS based features. About 2 hours of EEG recordings from 10 patients were used in this study. This thesis introduces unique bispectrum and bicoherence plots for various cardiac conditions and for normal, background and epileptic EEG signals. These plots reveal distinct patterns that are useful for visual interpretation by those without a deep understanding of spectral analysis, such as medical practitioners.
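The bispectrum underlying the plots described above is the third-order spectrum B(f1, f2) = E[X(f1) X(f2) X*(f1 + f2)]; it vanishes for Gaussian signals and peaks where frequency components are phase-coupled. A minimal direct (segment-averaged FFT) estimate, with a synthetic quadratic-phase-coupling example rather than any HRV or EEG data, can be sketched as:

```python
import numpy as np

def bispectrum(segments):
    """Direct (averaged-FFT) bispectrum estimate:
    B(f1, f2) = mean over segments of X(f1) * X(f2) * conj(X(f1 + f2))."""
    segs = np.asarray(segments, float)
    n = segs.shape[1]
    B = np.zeros((n, n), complex)
    for seg in segs:
        X = np.fft.fft(seg - seg.mean())
        # accumulate X(f1) X(f2) X*(f1+f2) over all (f1, f2) bin pairs
        B += np.outer(X, X) * np.conj(
            X[(np.arange(n)[:, None] + np.arange(n)) % n])
    return B / len(segs)

# Quadratic phase coupling: a component at f1 + f2 whose phase is locked
# to the sum of the phases at f1 and f2 produces a bispectral peak at
# (f1, f2); with random, unlocked phases the average cancels out.
rng = np.random.default_rng(2)
n, f1, f2 = 64, 6, 10
t = np.arange(n)
coupled = []
for _ in range(200):
    p1, p2 = rng.uniform(0, 2 * np.pi, 2)
    coupled.append(np.cos(2 * np.pi * f1 * t / n + p1)
                   + np.cos(2 * np.pi * f2 * t / n + p2)
                   + np.cos(2 * np.pi * (f1 + f2) * t / n + p1 + p2))
B = np.abs(bispectrum(coupled))
```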
The thesis includes original contributions in extracting features from HRV and EEG signals using HOS and entropy, in analyzing the statistical properties of such features on real data, and in automated classification using these features with GMM and SVM classifiers.
Abstract:
The Australian tourism tertiary education sector operates in a competitive and dynamic environment, which necessitates a market orientation to be successful. Academic staff and management in the sector must regularly assess the perceptions of prospective and current students and monitor the satisfaction levels of current students. This study is concerned with the setting and monitoring of satisfaction levels of current students, reporting the results of three longitudinal investigations of student satisfaction in a postgraduate unit. The study also addresses a limitation of a university's generic teaching evaluation instrument. Importance-Performance Analysis (IPA) has been recommended as a simple but effective tool for overcoming the deficiencies of many student evaluation studies, which have generally measured only attribute performance at the end of a semester. IPA was used to compare student expectations of the unit at the beginning of a semester with their perceptions of performance 10 weeks later. The first stage documented key benchmarks against which amendments to the unit, based on student feedback, could be evaluated during subsequent teaching periods.
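The mechanics of IPA reduce to plotting each attribute's mean importance against its mean performance and reading off the quadrant. A minimal sketch follows, using the classic Martilla and James quadrant labels and grand-mean crosshairs (one common convention; the attribute names and scores are invented for illustration, not taken from the study).

```python
def ipa_quadrants(attributes):
    """Classic Importance-Performance Analysis grid.

    `attributes` maps attribute name -> (mean importance, mean performance);
    the crosshairs are placed at the grand means of each axis.
    """
    imps = [i for i, _ in attributes.values()]
    perfs = [p for _, p in attributes.values()]
    i_bar = sum(imps) / len(imps)
    p_bar = sum(perfs) / len(perfs)
    labels = {}
    for name, (i, p) in attributes.items():
        if i >= i_bar and p < p_bar:
            labels[name] = "Concentrate here"
        elif i >= i_bar:
            labels[name] = "Keep up the good work"
        elif p < p_bar:
            labels[name] = "Low priority"
        else:
            labels[name] = "Possible overkill"
    return labels

# Hypothetical unit attributes rated on a 5-point scale.
labels = ipa_quadrants({
    "feedback turnaround": (4.6, 3.1),
    "lecture clarity": (4.4, 4.2),
    "printed handouts": (2.8, 4.0),
    "online forum": (3.0, 2.9),
})
```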
Abstract:
For the first time in human history, large volumes of spoken audio are being broadcast, made available on the internet, archived, and monitored for surveillance every day. New technologies are urgently required to unlock these vast and powerful stores of information. Spoken Term Detection (STD) systems provide access to speech collections by detecting individual occurrences of specified search terms. The aim of this work is to develop improved STD solutions based on phonetic indexing. In particular, this work aims to develop phonetic STD systems for applications that require open-vocabulary search, fast indexing and search speeds, and accurate term detection. Within this scope, novel contributions are made within two research themes: firstly, accommodating phone recognition errors and, secondly, modelling uncertainty with probabilistic scores. A state-of-the-art Dynamic Match Lattice Spotting (DMLS) system is used to address the problem of accommodating phone recognition errors with approximate phone sequence matching. Extensive experimentation on the use of DMLS is carried out, and a number of novel enhancements are developed that provide faster indexing, faster search, and improved accuracy. Firstly, a novel comparison of methods for deriving a phone error cost model is presented to improve STD accuracy, resulting in up to a 33% improvement in the Figure of Merit. A method is also presented for drastically increasing the speed of DMLS search, by at least an order of magnitude, with no loss in search accuracy. An investigation is then presented of the effects of increasing indexing speed for DMLS by using simpler modelling during phone decoding, with results highlighting the trade-off between indexing speed, search speed and search accuracy. The Figure of Merit is further improved, by up to 25%, using a novel proposal to utilise word-level language modelling during DMLS indexing.
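The approximate phone sequence matching at the heart of such a system can be sketched as a weighted minimum edit distance, where a phone error cost model makes acoustically confusable substitutions cheap. The costs below are toy stand-ins for a trained cost model, and the phone sequences are invented for illustration.

```python
def phone_match_cost(target, candidate, sub_cost, ins_cost=1.0, del_cost=1.0):
    """Minimum-edit-distance score of a search-term phone sequence
    against an indexed phone sequence (dynamic programming).

    `sub_cost` maps (target_phone, candidate_phone) -> substitution cost;
    unlisted substitutions default to 1.0.  Lower scores = better match.
    """
    n, m = len(target), len(candidate)
    D = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        D[i][0] = D[i - 1][0] + del_cost
    for j in range(1, m + 1):
        D[0][j] = D[0][j - 1] + ins_cost
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = 0.0 if target[i - 1] == candidate[j - 1] else \
                sub_cost.get((target[i - 1], candidate[j - 1]), 1.0)
            D[i][j] = min(D[i - 1][j] + del_cost,      # delete target phone
                          D[i][j - 1] + ins_cost,      # insert candidate phone
                          D[i - 1][j - 1] + s)         # substitute / match
    return D[n][m]

# /t/ and /d/ are made cheap to confuse, so "want" still matches a
# recogniser output that decoded the final /t/ as /d/.
costs = {("t", "d"): 0.2, ("d", "t"): 0.2}   # assumed confusion costs
close = phone_match_cost(["w", "aa", "n", "t"], ["w", "aa", "n", "d"], costs)
far = phone_match_cost(["w", "aa", "n", "t"], ["s", "ih", "t"], costs)
```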
Analysis shows that this use of language modelling can, however, be unhelpful or even disadvantageous for terms with a very low language model probability. The DMLS approach to STD involves generating an index of phone sequences using phone recognition. An alternative approach to phonetic STD is also investigated that instead indexes probabilistic acoustic scores in the form of a posterior-feature matrix. A state-of-the-art system is described and its use for STD is explored through several experiments on spontaneous conversational telephone speech. A novel technique and framework are proposed for discriminatively training such a system to directly maximise the Figure of Merit, resulting in a 13% improvement in the Figure of Merit on held-out data. The framework is also found to be particularly useful for index compression in conjunction with the proposed optimisation technique, providing a substantial index compression factor in addition to an overall gain in the Figure of Merit. These contributions significantly advance the state-of-the-art in phonetic STD by improving the utility of such systems in a wide range of applications.
Resumo:
A finite element numerical simulation is carried out to examine stress distributions in the railhead in the vicinity of the endpost of an insulated rail joint. The contact patch and pressure distribution are modelled using a modified Hertzian simulation. A combined elasto-plastic material model available in Abaqus is employed in the simulation. A dynamic load factor of 1.21 is applied to the wheel load, based on a previous study within this ongoing research. The shakedown theorem is employed in this study: a peak pressure load above the shakedown limit is determined as the input load. As a result, progressive damage in the railhead is captured, as depicted in the equivalent plastic strain plot.
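The load selection described above can be sketched roughly as follows. For a frictionless 2-D Hertzian line contact the peak pressure is p0 = sqrt(P·E*/(π·R)), and the classical elastic shakedown limit for repeated frictionless rolling contact is approximately p0 = 4k, with k the shear yield stress. Everything below other than the 1.21 dynamic load factor is an illustrative assumption, not a value or formula taken from the study:

```python
import math

def hertz_peak_pressure(P, R, E_star):
    """Peak contact pressure p0 for a 2-D Hertzian line contact.

    P      : normal load per unit length (N/m)
    R      : effective wheel radius (m)
    E_star : effective contact modulus (Pa)
    """
    return math.sqrt(P * E_star / (math.pi * R))

DYNAMIC_LOAD_FACTOR = 1.21  # wheel-load factor from the study's prior work

def exceeds_shakedown(static_load, R, E_star, k, limit_ratio=4.0):
    """True if the dynamically factored load drives p0 above the
    assumed frictionless shakedown limit p0 = limit_ratio * k."""
    p0 = hertz_peak_pressure(DYNAMIC_LOAD_FACTOR * static_load, R, E_star)
    return p0 > limit_ratio * k
```

A load passing this check is the kind of above-shakedown input that produces the progressive (ratchetting) plastic strain accumulation the study reports.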
Resumo:
This study aimed to investigate the spatial clustering and dynamic dispersion of dengue incidence in Queensland, Australia. We used Moran’s I statistic to assess the spatial autocorrelation of reported dengue cases. Spatial empirical Bayes smoothing estimates were used to display the spatial distribution of dengue in postal areas throughout Queensland. Local indicators of spatial association (LISA) maps and logistic regression models were used to identify spatial clusters and examine the spatio-temporal patterns of the spread of dengue. The results indicate that the spatial distribution of dengue was clustered during each of the three periods of 1993–1996, 1997–2000 and 2001–2004. The high-incidence clusters of dengue were primarily concentrated in the north of Queensland and low-incidence clusters occurred in the south-east of Queensland. The study concludes that the geographical range of notified dengue cases has significantly expanded in Queensland over recent years.
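Global Moran's I, the clustering statistic applied above, can be sketched in a few lines. The weight matrix and names here are illustrative (the study used postal-area neighbourhood weights), and values near +1 indicate clustering, near -1 dispersion, near 0 spatial randomness:

```python
import numpy as np

def morans_i(x, w):
    """Global Moran's I for values x under spatial weight matrix w.

    x : 1-D array of rates (e.g. smoothed dengue incidence per area)
    w : 2-D array with w[i, j] > 0 iff areas i and j are neighbours
    """
    x = np.asarray(x, dtype=float)
    w = np.asarray(w, dtype=float)
    n = x.size
    z = x - x.mean()                 # deviations from the mean rate
    s0 = w.sum()                     # total weight
    num = z @ w @ z                  # cross-products over neighbour pairs
    den = (z ** 2).sum()
    return (n / s0) * num / den
```

The LISA maps mentioned above decompose this global statistic into one local value per area, which is what isolates the high-incidence clusters in the north and the low-incidence clusters in the south-east.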
Resumo:
The Airy stress function, although frequently employed in classical linear elasticity, does not receive similar usage for granular media problems. For plane strain quasi-static deformations of a cohesionless Coulomb–Mohr granular solid, a single nonlinear partial differential equation is formulated for the Airy stress function by combining the equilibrium equations with the yield condition. This has certain advantages over the usual approach, in which two stress invariants and a stress angle are introduced, and a system of two partial differential equations is needed to describe the flow. In the present study, the symmetry analysis of differential equations is utilised for our single partial differential equation, and by computing an optimal system of one-dimensional Lie algebras, a complete set of group-invariant solutions is derived. By this it is meant that any group-invariant solution of the governing partial differential equation (provided it can be derived via the classical symmetries method) may be obtained as a member of this set by a suitable group transformation. For general values of the parameters (angle of internal friction and gravity g) it is found there are three distinct classes of solutions which correspond to granular flows considered previously in the literature. For the two limiting cases of high angle of internal friction and zero gravity, the governing partial differential equation admits larger families of Lie point symmetries, and from these symmetries, further solutions are derived, many of which are new. Furthermore, the majority of these solutions are exact, which is rare for granular flow, especially in the case of gravity-driven flows.
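In standard notation (the paper's sign conventions may differ), the reduction works because the Airy representation satisfies the plane equilibrium equations with gravity identically, so substituting it into the cohesionless Coulomb–Mohr yield condition leaves a single second-order PDE for the stress function alone:

```latex
% Airy stress function \phi with gravitational potential V = \rho g y
% (tension-positive convention assumed here):
\sigma_{xx} = \frac{\partial^{2}\phi}{\partial y^{2}} + \rho g y, \qquad
\sigma_{yy} = \frac{\partial^{2}\phi}{\partial x^{2}} + \rho g y, \qquad
\sigma_{xy} = -\frac{\partial^{2}\phi}{\partial x\,\partial y},
% which satisfies
% \partial_x \sigma_{xx} + \partial_y \sigma_{xy} = 0 and
% \partial_x \sigma_{xy} + \partial_y \sigma_{yy} - \rho g = 0 identically.

% Cohesionless Coulomb--Mohr yield condition, internal friction angle \varphi:
\left(\sigma_{xx}-\sigma_{yy}\right)^{2} + 4\,\sigma_{xy}^{2}
  = \sin^{2}\varphi \left(\sigma_{xx}+\sigma_{yy}\right)^{2}.
```

Substituting the first three relations into the yield condition eliminates the stresses and yields the single nonlinear PDE in \phi to which the Lie symmetry analysis is then applied.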
Resumo:
If the trade union movement is to remain an influential force in the industrial, economic and socio-political arenas of industrialised nations it is vital that its recruitment of young members improve dramatically. Australian union membership levels have declined markedly over the last three decades and youth union membership levels have decreased more than those of any other age group. Currently around 10% of young workers aged 16–24 years are members of unions in Australia, compared to 26% of workers aged 45–58 (Oliver, 2008). This decline has occurred throughout the union movement, in all states and in almost all industries and occupations. This research, which consists of interviews with union organisers and union officials, draws on perspectives from the labour geography literature to explore how union personnel located in various places, spaces and scales construct the issue of declining youth union membership. It explores the scale of connections within the labour movement and the extent to which these connections are leveraged to address the problem of youth union membership decline. To offer the reader a sense of context and perspective, the thesis firstly outlines the historical development of the union movement. It also reviews the literature on youth membership decline. Labour geography offers a rich and apposite analytical tool for investigation of this area. The notion of ‘scale’ as a dynamic, interactive, constructed and reconstructed entity (Ellem, 2006) is an appropriate lens for viewing youth-union membership issues. In this non-linear view, scale is a relational element which interplays with space, place and the environment (Howett, in Marston, 2000) rather than being ‘sequential’ and hierarchical. Importantly, the thesis investigates the notion of unions as ‘spaces of dependence’ (Cox, 1998a, p.2), organisations whose space is centred upon realising essential interests.
It also considers the quality of unions’ interactions with others – their ‘spaces of engagement’ (Cox, 1998a, p.2) – and the impact that this has upon their ability to recruit youth. The findings reveal that most respondents across the spectrum of the union movement attribute the decline in youth membership levels to factors external to the movement itself, such as changes to industrial relations legislation and the impact of globalisation on employment markets. However, participants also attribute responsibility for declining membership levels to the union movement itself, citing factors such as a lack of resourcing and a need to change unions’ perceived identity and methods of operation. The research further determined that networks of connections across the union movement are tenuous and, to date, are not being fully utilised to assist unions to overcome the youth recruitment dilemma. The study concludes that potential connections between unions are hampered by poor resourcing, workload issues and some deeply entrenched attitudes related to unions ‘defending (and maintaining) their patch’.