856 results for Unified Formulation
Abstract:
This paper presents a unified and systematic assessment of ten position control strategies for a hydraulic servo system with a single-ended cylinder driven by a proportional directional control valve. We aim to identify the methods that achieve better tracking, have low sensitivity to system uncertainties, and offer a good balance between development effort and end results. A formal approach to this problem, introduced herein, relies on several practical metrics. Their choice is important, as the comparison results between controllers can vary significantly depending on the selected criterion. Beyond the quantitative assessment, we also raise aspects that are difficult to quantify but must be kept in mind when considering the position control problem for this class of hydraulic servo systems.
Abstract:
This paper demonstrates some interesting connections between the hitherto disparate fields of mobile robot navigation and image-based visual servoing. A planar formulation of the well-known image-based visual servoing method leads to a bearing-only navigation system that requires no explicit localization and directly yields the desired velocity. The well-known benefits of image-based visual servoing, such as robustness, also apply to the planar case. Simulation results are presented.
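The bearing-only idea can be sketched with a simple unicycle model: steer at a rate proportional to the bearing of the goal in the robot frame, with no position estimate ever computed by the controller. This is a minimal illustration of the concept only, not the paper's control law; the gain `k`, speed `v`, time step and goal location are hypothetical choices.

```python
import math

def bearing_to_goal(x, y, theta, gx, gy):
    """Bearing of the goal in the robot frame, wrapped to [-pi, pi]."""
    b = math.atan2(gy - y, gx - x) - theta
    return (b + math.pi) % (2.0 * math.pi) - math.pi

def step(x, y, theta, gx, gy, v=0.5, k=2.0, dt=0.05):
    """One control step of a unicycle: turn rate proportional to bearing error."""
    omega = k * bearing_to_goal(x, y, theta, gx, gy)
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return x, y, theta

# Drive from the origin toward a goal at (5, 3); stop once close.
x, y, theta = 0.0, 0.0, 0.0
for _ in range(1000):
    if math.hypot(5.0 - x, 3.0 - y) < 0.3:
        break
    x, y, theta = step(x, y, theta, 5.0, 3.0)
```

Because only a bearing drives the controller, no explicit localization is required, which mirrors the navigation scheme described in the abstract.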
Abstract:
Over recent years, Unmanned Aerial Vehicles (UAVs) have become a powerful tool for reconnaissance and surveillance tasks. These vehicles are now available in a broad range of sizes and capabilities and are intended to fly in regions where the presence of onboard human pilots is either too risky or unnecessary. This paper describes the formulation and application of a design framework that supports the complex task of multidisciplinary design optimisation of UAV systems via evolutionary computation. The framework includes a Graphical User Interface (GUI), a robust Evolutionary Algorithm optimiser named HAPEA, several design modules, mesh generators and post-processing capabilities in an integrated platform. Population-based algorithms such as EAs are well suited to problems where the search space can be multi-modal, non-convex or discontinuous, with multiple local minima and with noise, and also to problems where we seek multiple solutions via Game Theory, namely a Nash equilibrium point or a Pareto set of non-dominated solutions. The application of the methodology is illustrated on conceptual and detailed multi-criteria and multidisciplinary shape design problems. Results indicate the practicality and robustness of the framework in finding optimal shapes and trade-offs between the disciplinary analyses, and in producing a set of non-dominated solutions forming an optimal Pareto front for the designer.
Abstract:
Recently, numerical modelling and simulation of the anomalous subdiffusion equation (ASDE), a type of fractional partial differential equation (FPDE) that has found wide application in modern engineering and science, have been attracting growing attention. The currently dominant numerical method for modelling the ASDE is the Finite Difference Method (FDM), which relies on a pre-defined grid and thus inherits that grid's issues and shortcomings. This paper aims to develop an implicit meshless approach based on radial basis functions (RBFs) for numerical simulation of the non-linear ASDE. The discrete system of equations is obtained by using the meshless shape functions and the strong form. The stability and convergence of this meshless approach are then discussed and theoretically proven. Several numerical examples with different problem domains are used to validate the accuracy and efficiency of the newly developed meshless formulation. The results obtained by the meshless formulation are also compared with those obtained by the FDM in terms of accuracy and efficiency. It is concluded that the present meshless formulation is very effective for the modelling and simulation of the ASDE. The meshless technique therefore shows good potential for the development of a robust simulation tool for problems in engineering and science governed by the various types of fractional differential equations.
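The strong-form RBF collocation mechanism can be illustrated on a far simpler problem than the ASDE: a 1D Poisson equation u'' = f with homogeneous boundary conditions, where collocation rows use the second derivative of a Gaussian RBF at interior nodes and the RBF itself at boundary nodes. This is a hedged sketch of the general meshless idea only; it does not reproduce the fractional time discretisation of the actual scheme, and the node count and shape parameter are arbitrary choices.

```python
import math

EPS = 3.0  # Gaussian RBF shape parameter (arbitrary choice)

def phi(x, c):
    """Gaussian RBF centred at c."""
    return math.exp(-(EPS * (x - c)) ** 2)

def phi_xx(x, c):
    """Second derivative of the Gaussian RBF, used in interior collocation rows."""
    r = x - c
    return (4 * EPS**4 * r * r - 2 * EPS**2) * math.exp(-(EPS * r) ** 2)

def solve(A, b):
    """Tiny Gaussian elimination with partial pivoting (avoids external libraries)."""
    n = len(b)
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            b[i] -= m * b[k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
    w = [0.0] * n
    for i in range(n - 1, -1, -1):
        w[i] = (b[i] - sum(A[i][j] * w[j] for j in range(i + 1, n))) / A[i][i]
    return w

# Solve u'' = -pi^2 sin(pi x) on [0, 1], u(0) = u(1) = 0 (exact: u = sin(pi x)).
nodes = [i / 8 for i in range(9)]   # scattered nodes would work equally well
A, b = [], []
for xi in nodes:
    if xi in (0.0, 1.0):            # boundary rows: enforce u = 0
        A.append([phi(xi, c) for c in nodes])
        b.append(0.0)
    else:                           # interior rows: collocate the PDE strong form
        A.append([phi_xx(xi, c) for c in nodes])
        b.append(-math.pi**2 * math.sin(math.pi * xi))
w = solve(A, b)
u_mid = sum(wj * phi(0.5, c) for wj, c in zip(w, nodes))  # exact value is 1
```

The nodes need no connectivity information, which is the essential contrast with a grid-based FDM discretisation.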
Abstract:
Large deformation analysis is one of the major challenges in the numerical modelling and simulation of metal forming. Because no mesh is used, meshfree methods show good potential for large deformation analysis. In this paper, a local meshfree formulation, based on local weak forms and the updated Lagrangian (UL) approach, is developed for large deformation analysis. To fully exploit the advantages of meshfree methods, a simple and effective adaptive technique is proposed; this procedure is much easier than re-meshing in FEM. Numerical examples of large deformation analysis are presented to demonstrate the effectiveness of the newly developed nonlinear meshfree approach. The developed meshfree technique has been found to provide superior performance to the conventional FEM in dealing with large deformation problems in metal forming.
Abstract:
In this paper we describe the Large Margin Vector Quantization algorithm (LMVQ), which uses gradient ascent to maximise the margin of a radial basis function classifier. We present a derivation of the algorithm, which proceeds from an estimate of the class-conditional probability densities. We show that the key behaviours of Kohonen's well-known LVQ2 and LVQ3 algorithms emerge as natural consequences of our formulation. We compare the performance of LMVQ with that of Kohonen's LVQ algorithms on an artificial classification problem and several well-known benchmark classification tasks. We find that the classifiers produced by LMVQ attain a level of accuracy that compares well with those obtained via LVQ1, LVQ2 and LVQ3, with reduced storage complexity. We indicate future directions of enquiry based on the large margin approach to Learning Vector Quantization.
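Since LMVQ itself is derived in the paper, a hedged sketch of the simplest baseline it is compared against may help: Kohonen's LVQ1, in which the nearest prototype is pulled toward a training point of the same class and pushed away otherwise. The 1D toy data, learning rate and prototype initialisation below are illustrative assumptions, not anything from the paper's experiments.

```python
def nearest(protos, x):
    """Index of the prototype closest to x (1D for simplicity)."""
    return min(range(len(protos)), key=lambda i: abs(protos[i][0] - x))

def lvq1_fit(protos, data, lr=0.1, epochs=30):
    """LVQ1: protos is a list of [position, label]; data a list of (x, label)."""
    for _ in range(epochs):
        for x, label in data:
            i = nearest(protos, x)
            sign = 1.0 if protos[i][1] == label else -1.0
            protos[i][0] += sign * lr * (x - protos[i][0])  # attract or repel
    return protos

def predict(protos, x):
    return protos[nearest(protos, x)][1]

# Two 1D classes: class 0 clustered near 0, class 1 clustered near 4.
data = [(v, 0) for v in (-0.4, -0.1, 0.0, 0.2, 0.5)] + \
       [(v, 1) for v in (3.6, 3.9, 4.0, 4.1, 4.4)]
protos = lvq1_fit([[1.0, 0], [3.0, 1]], data)
```

The reduced storage complexity noted in the abstract comes from keeping only a handful of prototypes rather than the full training set.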
Abstract:
The paper compares three different methods of inclusion of current phasor measurements by phasor measurement units (PMUs) in the conventional power system state estimator. For each of the three methods, a comprehensive formulation of the hybrid state estimator in the presence of conventional and PMU measurements is presented. The performance of the state estimator in the presence of conventional measurements and optimally placed PMUs is evaluated in terms of convergence characteristics and estimator accuracy. Test results on the IEEE 14-bus and IEEE 300-bus systems are analyzed to determine the best possible method of inclusion of PMU current phasor measurements.
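One way to picture how phasor measurements enter an estimator is a toy linear (DC) weighted-least-squares example, where a PMU voltage angle appears as a direct, highly weighted measurement of the state alongside conventional line-flow measurements. This is a deliberately simplified sketch (a 3-bus DC model with made-up reactances and readings), not any of the three AC hybrid formulations compared in the paper.

```python
def wls(H, z, w):
    """Solve the 2-unknown normal equations (H^T W H) x = H^T W z directly."""
    g = [[0.0, 0.0], [0.0, 0.0]]
    r = [0.0, 0.0]
    for row, zi, wi in zip(H, z, w):
        for a in range(2):
            r[a] += wi * row[a] * zi
            for b in range(2):
                g[a][b] += wi * row[a] * row[b]
    det = g[0][0] * g[1][1] - g[0][1] * g[1][0]
    return [(g[1][1] * r[0] - g[0][1] * r[1]) / det,
            (g[0][0] * r[1] - g[1][0] * r[0]) / det]

# State: [theta2, theta3] in radians (bus 1 is the slack, theta1 = 0).
# Rows: flow on line 1-2 (x = 0.1), flow on line 2-3 (x = 0.2), PMU angle at bus 3.
H = [[-10.0, 0.0],   # P12 = (theta1 - theta2) / 0.1
     [5.0, -5.0],    # P23 = (theta2 - theta3) / 0.2
     [0.0, 1.0]]     # PMU: theta3 measured directly
z = [0.5, 0.25, -0.1]
w = [1.0, 1.0, 100.0]  # the PMU reading gets a much higher weight
theta2, theta3 = wls(H, z, w)
```

Because the measurements here are consistent, the estimate recovers theta2 = -0.05 and theta3 = -0.1 exactly; with noisy data, the weighting would pull the solution toward the more accurate PMU reading.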
Abstract:
Current rapid increases in the scope of regional development and the reach of technology have combined with the expanding scale of modern settlements to focus growing attention on infrastructure provision needs. This has included organisational and funding systems, the management of new technologies and regional-scale social provisions. In this chapter, the evolution of urban and regional infrastructure is traced from its earliest origins in the growth of organised societies 5,000 years ago. Infrastructure needs and provision are illustrated for metropolitan, provincial and rural regions. Rural infrastructure examples and lessons are drawn from global case studies. Recent expansions of the scope of infrastructure are examined, and issues of governance and process are discussed. Phased planning processes are related to cycles of program adoption, objective formulation, option evaluation and program budgeting. Issues of privatisation and the public interest are considered. Matters of contemporary global significance are explored, including the current economic contraction and the effects of global climate change. Conclusions are drawn about the role and importance of linking regional planning to coherent regional infrastructure programs and budgets.
Abstract:
While close-talking microphones give the best signal quality and produce the highest accuracy from current Automatic Speech Recognition (ASR) systems, the speech signal enhanced by a microphone array has been shown to be an effective alternative in a noisy environment. The use of microphone arrays, in contrast to close-talking microphones, alleviates the feeling of discomfort and distraction for the user. For this reason, microphone arrays are popular and have been used in a wide range of applications such as teleconferencing, hearing aids, speaker tracking, and as the front-end to speech recognition systems. With advances in sensor and sensor network technology, there is considerable potential for applications that employ ad-hoc networks of microphone-equipped devices collaboratively as a virtual microphone array. By allowing such devices to be distributed throughout the users’ environment, the microphone positions are no longer constrained to traditional fixed geometrical arrangements. This flexibility in the means of data acquisition allows different audio scenes to be captured to give a complete picture of the working environment. In such ad-hoc deployments of microphone sensors, however, the lack of information about the location of devices and active speakers poses technical challenges for array signal processing algorithms which must be addressed to allow deployment in real-world applications. While not an ad-hoc sensor network, conditions approaching this have in effect been imposed in recent National Institute of Standards and Technology (NIST) ASR evaluations on distant microphone recordings of meetings. The NIST evaluation data come from multiple sites, each with different and often loosely specified distant microphone configurations. This research investigates how microphone array methods can be applied to ad-hoc microphone arrays.
A particular focus is on devising methods that are robust to unknown microphone placements in order to improve the overall speech quality and recognition performance provided by the beamforming algorithms. In ad-hoc situations, microphone positions and likely source locations are not known and beamforming must be achieved blindly. There are two general approaches that can be employed to blindly estimate the steering vector for beamforming. The first is direct estimation without regard to the microphone and source locations. An alternative approach is instead to first determine the unknown microphone positions through array calibration methods and then to use the traditional geometrical formulation for the steering vector. Building on these two major approaches investigated in this thesis, a novel clustered approach is proposed, which clusters the microphones and selects clusters based on their proximity to the speaker. Novel experiments are conducted to demonstrate that the proposed method of automatically selecting clusters of microphones (i.e., a subarray) closely located both to each other and to the desired speech source may in fact provide more robust speech enhancement and recognition than the full array could.
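The traditional geometrical formulation for the steering vector can be sketched for the narrowband case: once microphone positions are known (e.g. after calibration), the steering vector follows directly from propagation delays, and delay-and-sum beamforming reduces to phase alignment. The geometry, frequency and unit-amplitude source below are illustrative assumptions, not the thesis's configuration.

```python
import math
import cmath

SPEED_OF_SOUND = 343.0  # m/s

def steering_vector(mics, src, freq):
    """Narrowband geometric steering vector: a_m = exp(-j*2*pi*f*tau_m)."""
    taus = [math.dist(m, src) / SPEED_OF_SOUND for m in mics]
    return [cmath.exp(-2j * math.pi * freq * t) for t in taus]

def delay_and_sum(snapshots, a):
    """Phase-align each channel with the conjugate steering vector and average."""
    return sum(x * w.conjugate() for x, w in zip(snapshots, a)) / len(a)

# Four arbitrarily placed microphones and a source at (2, 1), positions in metres.
mics = [(0.0, 0.0), (0.3, 0.1), (0.1, 0.5), (0.6, 0.4)]
src, freq = (2.0, 1.0), 1000.0
a = steering_vector(mics, src, freq)
snapshots = [am * (1.0 + 0.0j) for am in a]  # noise-free unit-amplitude source
out = delay_and_sum(snapshots, a)            # recovers the source phasor
```

In the blind case none of the positions feeding `steering_vector` are available, which is precisely why the direct-estimation, calibration-based and clustered approaches above are needed.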
Abstract:
Uninhabited aerial vehicles (UAVs) are a cutting-edge technology that is at the forefront of aviation/aerospace research and development worldwide. Many consider their current military and defence applications as just a token of their enormous potential. Unlocking and fully exploiting this potential will see UAVs in a multitude of civilian applications and routinely operating alongside piloted aircraft. The key to realising the full potential of UAVs lies in addressing a host of regulatory, public relations, and technological challenges never encountered before. Aircraft collision avoidance is considered to be one of the most important issues to be addressed, given its safety critical nature. The collision avoidance problem can be roughly organised into three areas: 1) Sense; 2) Detect; and 3) Avoid. Sensing is concerned with obtaining accurate and reliable information about other aircraft in the air; detection involves identifying potential collision threats based on available information; avoidance deals with the formulation and execution of appropriate manoeuvres to maintain safe separation. This thesis tackles the detection aspect of collision avoidance, via the development of a target detection algorithm that is capable of real-time operation onboard a UAV platform. One of the key challenges of the detection problem is the need to provide early warning. This translates to detecting potential threats whilst they are still far away, when their presence is likely to be obscured and hidden by noise. Another important consideration is the choice of sensors to capture target information, which has implications for the design and practical implementation of the detection algorithm.
The main contributions of the thesis are: 1) the proposal of a dim target detection algorithm combining image morphology and hidden Markov model (HMM) filtering approaches; 2) the novel use of relative entropy rate (RER) concepts for HMM filter design; 3) the characterisation of algorithm detection performance based on simulated data as well as real in-flight target image data; and 4) the demonstration of the proposed algorithm's capacity for real-time target detection. We also consider the extension of HMM filtering techniques and the application of RER concepts for target heading angle estimation. In this thesis we propose a computer-vision based detection solution, due to the commercial-off-the-shelf (COTS) availability of camera hardware and the hardware's relatively low cost, power, and size requirements. The proposed target detection algorithm adopts a two-stage processing paradigm that begins with an image enhancement pre-processing stage followed by a track-before-detect (TBD) temporal processing stage that has been shown to be effective in dim target detection. We compare the performance of two candidate morphological filters for the image pre-processing stage, and propose a multiple hidden Markov model (MHMM) filter for the TBD temporal processing stage. The role of the morphological pre-processing stage is to exploit the spatial features of potential collision threats, while the MHMM filter serves to exploit the temporal characteristics or dynamics. The problem of optimising our proposed MHMM filter has been examined in detail. Our investigation has produced a novel design process for the MHMM filter that exploits information theory and entropy related concepts. The filter design process is posed as a mini-max optimisation problem based on a joint RER cost criterion. 
We provide proof that this joint RER cost criterion provides a bound on the conditional mean estimate (CME) performance of our MHMM filter, and this in turn establishes a strong theoretical basis connecting our filter design process to filter performance. Through this connection we can intelligently compare and optimise candidate filter models at the design stage, rather than having to resort to time-consuming Monte Carlo simulations to gauge the relative performance of candidate designs. Moreover, the underlying entropy concepts are not constrained to any particular model type. This suggests that the RER concepts established here may be generalised to provide a useful design criterion for multiple model filtering approaches outside the class of HMM filters. In this thesis we also evaluate the performance of our proposed target detection algorithm under realistic operation conditions, and give consideration to the practical deployment of the detection algorithm onboard a UAV platform. Two fixed-wing UAVs were engaged to recreate various collision-course scenarios to capture highly realistic vision (from an onboard camera perspective) of the moments leading up to a collision. Based on this collected data, our proposed detection approach was able to detect targets out to distances ranging from about 400m to 900m. These distances (with some assumptions about closing speeds and aircraft trajectories) translate to an advanced warning ahead of impact that approaches the 12.5 second response time recommended for human pilots. Furthermore, readily available graphic processing unit (GPU) based hardware is exploited for its parallel computing capabilities to demonstrate the practical feasibility of the proposed target detection algorithm. A prototype hardware-in-the-loop system has been found to be capable of achieving data processing rates sufficient for real-time operation. There is also scope for further improvement in performance through code optimisations.
Overall, our proposed image-based target detection algorithm offers UAVs a cost-effective real-time target detection capability that is a step forward in addressing the collision avoidance issue that is currently one of the most significant obstacles preventing widespread civilian applications of uninhabited aircraft. We also highlight that the algorithm development process has led to the discovery of a powerful multiple HMM filtering approach and a novel RER-based multiple filter design process. The utility of our multiple HMM filtering approach and RER concepts, however, extend beyond the target detection problem. This is demonstrated by our application of HMM filters and RER concepts to a heading angle estimation problem.
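The core of an HMM-based track-before-detect stage is a forward (Bayesian) recursion over a grid of candidate target positions: propagate the posterior through the assumed target dynamics, reweight by a per-pixel likelihood ratio, and normalise; a detection is declared when the posterior concentrates. The sketch below is a 1D toy with invented dynamics and likelihood values; it is not the thesis's MHMM filter, its morphological pre-processing, or its RER-based design process.

```python
def hmm_tbd_step(prior, lratio, p_stay=0.6, p_move=0.2):
    """One forward-filter step on a 1D strip of cells (reflecting boundaries)."""
    n = len(prior)
    pred = [0.0] * n
    for i, p in enumerate(prior):          # predict: target stays or moves +/- 1
        pred[i] += p_stay * p
        pred[max(i - 1, 0)] += p_move * p
        pred[min(i + 1, n - 1)] += p_move * p
    post = [pr * lr for pr, lr in zip(pred, lratio)]   # update by likelihood ratio
    s = sum(post)
    return [p / s for p in post]

N, target = 12, 5
posterior = [1.0 / N] * N                  # uniform prior over cells
for _ in range(8):
    # Invented per-cell likelihood ratios: slightly stronger evidence at the
    # (dim) target's cell than elsewhere; a single frame would not suffice.
    lratio = [3.0 if i == target else 1.0 for i in range(N)]
    posterior = hmm_tbd_step(posterior, lratio)
peak = max(range(N), key=posterior.__getitem__)  # detection: argmax of posterior
```

Accumulating evidence across frames before thresholding is what lets a recursion of this kind flag targets too dim to detect in any single image.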
Abstract:
Streptococcus pyogenes, also known as Group A Streptococcus (GAS), has been associated with a range of diseases, from mild pharyngitis and pyoderma to more severe invasive infections such as streptococcal toxic shock. GAS also causes a number of non-suppurative post-infectious diseases such as rheumatic fever, rheumatic heart disease and glomerulonephritis. The large GAS disease burden underscores the need for a prophylactic vaccine that could target the diverse GAS emm types circulating globally. Anti-GAS vaccine strategies have focused primarily on the GAS M-protein, an extracellular virulence factor anchored to the GAS cell wall. As opposed to the hypervariable N-terminal region, the C-terminal portion of the protein is highly conserved among different GAS emm types and is the focus of a leading GAS vaccine candidate, J8-DT/alum. The vaccine candidate J8-DT/alum was shown to be immunogenic in mice, rabbits and non-human primates (hamadryas baboons). Similar responses to J8-DT/alum were observed after subcutaneous and intramuscular immunization, in mice and in rabbits. Further assessment of parameters that may influence the immunogenicity of J8-DT demonstrated that immune responses were identical in male and female mice, and that the use of alum as an adjuvant in the vaccine formulation significantly increased its immunogenicity, resulting in a long-lived serum IgG response. Contrary to previous findings, the data in this thesis indicate that a primary immunization with J8-DT/alum (50 μg) followed by a single boost is sufficient to generate a robust immune response in mice. As expected, the IgG response to J8-DT/alum was a Th2-type response consisting predominantly of the isotype IgG1, accompanied by lower levels of IgG2a. Intramuscular vaccination of rabbits with J8-DT/alum demonstrated that increasing the dose of J8-DT/alum up to 500 μg does not have an impact on the serum IgG titers achieved.
Similar to the immune response in mice, immunization with J8-DT/alum in baboons also established that a 60 μg dose, compared to either 30 μg or 120 μg, was sufficient to generate a robust immune response. Interestingly, mucosal infection of naive baboons with an M1 GAS strain did not induce a J8-specific serum IgG response. As J8-DT/alum-mediated protection has previously been reported to be due to the J8-specific antibody formed, the efficacy of J8-DT antibodies was determined in vitro and in vivo. In vitro opsonization and in vivo passive transfer confirmed the protective potential of J8-DT antibodies. A reduction in the bacterial burden after challenge with a bioluminescent M49 GAS strain in mice that were passively administered J8-DT IgG established that protection due to J8-DT was mediated by antibodies. The GAS burden in infected mice was monitored using bioluminescent imaging (BLI) in addition to traditional CFU assays. Bioluminescent GAS strains including the ‘rheumatogenic’ M1 GAS could not be generated due to limitations with transformation of GAS; however, an M49 GAS strain was utilized during BLI. The M49 serotype is traditionally a ‘nephritogenic’ serotype associated with post-streptococcal glomerulonephritis. Anti-J8-DT antibodies have now been shown to be protective against multiple GAS strains such as M49 and M1. This study evaluated the immunogenicity of J8-DT/alum in different species of experimental animals in preparation for phase I human clinical trials and provided the groundwork for the development of a rapid non-invasive assay for evaluation of vaccine candidates.
Abstract:
This thesis explores a way to inform the architectural design process for contemporary workplace environments. It reports on both theoretical and practical outcomes through an exclusively Australian case study of a network enterprise comprised of collaborative, yet independent business entities. The internet revolution, substantial economic and cultural shifts, and an increased emphasis on lifestyle considerations have prompted a radical re-ordering of organisational relationships and the associated structures, processes, and places of doing business. The social milieu of the information age and the knowledge economy is characterised by an almost instantaneous flow of information and capital. This has culminated in a phenomenon termed by Manuel Castells as the network society, where physical locations are joined together by continuous communication and virtual connectivity. A new spatial logic encompassing redefined concepts of space and distance, and requiring a comprehensive shift in the approach to designing workplace environments for today’s adaptive, collaborative organisations in a dynamic business world, provides the backdrop for this research. Within the duality of space and an augmentation of the traditional notions of place, organisational and institutional structures pose new challenges for the design professions. The literature revealed that there has always been a mono-organisational focus in relation to workplace design strategies. The phenomenon of inter-organisational collaboration has enabled the identification of a gap in the knowledge relative to workplace design. This new context generated the formulation of a unique research construct, the NetWorkPlace™©, which captures the complexity of contemporary employment structures embracing both physical and virtual work environments and practices, and provided the basis for investigating the factors that are shaping and defining interactions within and across networked organisational settings. 
The methodological orientation and the methods employed follow a qualitative approach and an abductively driven strategy comprising two distinct components, a cross-sectional study of the whole of the network and a longitudinal study focusing on a single discrete workplace site. The complexity of the context encountered dictated that a multi-dimensional investigative framework be devised. The adoption of a pluralist ontology and the reconfiguration of approaches from traditional paradigms into a collaborative, trans-disciplinary, multi-method epistemology provided an explicit and replicable method of investigation. The identification and introduction of the NetWorkPlace™© phenomenon, by necessity, spans a number of traditional disciplinary boundaries. Results confirm that in this context, architectural research, and by extension architectural practice, must engage with what other disciplines have to offer. The research concludes that no single disciplinary approach to either research or practice in this area of design can suffice. Pierre Bourdieu’s philosophy of ‘practice’ provides a framework within which the governance and technology structures, together with the mechanisms enabling the production of social order in this context, can be understood. This is achieved by applying the concepts of position and positioning to the corporate power dynamics, and integrating the conflict found to exist between enterprise-standard and ferally conceived technology systems. By extending existing theory and conceptions of ‘place’ and the ‘person-environment relationship’, relevant understandings of the tensions created between Castells’ notions of the space of place and the space of flows are established. The trans-disciplinary approach adopted, underpinned by a robust academic and practical framework, illustrates the potential for expanding the range and richness of understanding applicable to design in this context.
The outcome informs workplace design by extending theoretical horizons, and by the development of a comprehensive investigative process comprising a suite of models and techniques for both architectural and interior design research and practice, collectively entitled the NetWorkPlace™© Application Framework. This work contributes to the body of knowledge within the design disciplines in substantive, theoretical, and methodological terms, whilst potentially also influencing future organisational network theories, management practices, and information and communication technology applications. The NetWorkPlace™© as reported in this thesis, constitutes a multi-dimensional concept having the capacity to deal with the fluidity and ambiguity characteristic of the network context, as both a topic of research and the way of going about it.
Abstract:
During the past decade, a significant amount of research has been conducted internationally with the aim of developing, implementing, and verifying "advanced analysis" methods suitable for non-linear analysis and design of steel frame structures. Application of these methods permits comprehensive assessment of the actual failure modes and ultimate strengths of structural systems in practical design situations, without resort to simplified elastic methods of analysis and semi-empirical specification equations. Advanced analysis has the potential to extend the creativity of structural engineers and simplify the design process, while ensuring greater economy and more uniform safety with respect to the ultimate limit state. The application of advanced analysis methods has previously been restricted to steel frames comprising only members with compact cross-sections that are not subject to the effects of local buckling. This precluded the use of advanced analysis in the design of steel frames comprising a significant proportion of the most commonly used Australian sections, which are non-compact and subject to the effects of local buckling. This thesis contains a detailed description of research conducted over the past three years in an attempt to extend the scope of advanced analysis by developing methods that include the effects of local buckling in a non-linear analysis formulation, suitable for practical design of steel frames comprising non-compact sections. Two alternative concentrated plasticity formulations are presented in this thesis: the refined plastic hinge method and the pseudo plastic zone method. Both methods implicitly account for the effects of gradual cross-sectional yielding, longitudinal spread of plasticity, initial geometric imperfections, residual stresses, and local buckling.
The accuracy and precision of the methods for the analysis of steel frames comprising non-compact sections have been established by comparison with a comprehensive range of analytical benchmark frame solutions. Both the refined plastic hinge and pseudo plastic zone methods are more accurate and precise than the conventional individual member design methods based on elastic analysis and specification equations. For example, the pseudo plastic zone method predicts the ultimate strength of the analytical benchmark frames with an average conservative error of less than one percent, and has an acceptable maximum unconservative error of less than five percent. The pseudo plastic zone model can allow the design capacity to be increased by up to 30 percent for simple frames, mainly due to the consideration of inelastic redistribution. The benefits may be even more significant for complex frames with significant redundancy, which provides greater scope for inelastic redistribution. The analytical benchmark frame solutions were obtained using a distributed plasticity shell finite element model. A detailed description of this model and the results of all 120 benchmark analyses are provided. The model explicitly accounts for the effects of gradual cross-sectional yielding, longitudinal spread of plasticity, initial geometric imperfections, residual stresses, and local buckling. Its accuracy was verified by comparison with a variety of analytical solutions and the results of three large-scale experimental tests of steel frames comprising non-compact sections. A description of the experimental method and test results is also provided.
Abstract:
Throughout the twentieth century increased interest in the training of actors resulted in the emergence of a plethora of acting theories and innovative theatrical movements in Europe, the UK and the USA. The individuals or groups involved with the formulation of these theories and movements developed specific terminologies, or languages of acting, in an attempt to clearly articulate the nature and the practice of acting according to their particular pedagogy or theatrical aesthetic. Now at the dawning of the twenty-first century, Australia boasts quite a number of schools and university courses professing to train actors. This research aims to discover the language used in actor training on the east coast of Australia today. Using interviews with staff of the National Institute of Dramatic Art, the Victorian College of the Arts, and the Queensland University of Technology as the primary source of data, a constructivist grounded theory has emerged to assess the influence of last century's theatrical theorists and practitioners on Australian training and to ascertain the possibility of a distinctly Australian language of acting.
Abstract:
Data breach notification laws require organisations to notify affected persons or regulatory authorities when an unauthorised acquisition of personal data occurs. Most laws provide a safe harbour to this obligation if the acquired data has been encrypted. There are three types of safe harbour: an exemption; a rebuttable presumption; and factor-based analysis. We demonstrate, using three condition-based scenarios, that the broad formulation of most encryption safe harbours is based on the flawed assumption that encryption is the silver bullet for personal information protection. We then contend that reliance upon an encryption safe harbour should be dependent upon a rigorous and competent risk-based review that is required on a case-by-case basis. Finally, we recommend the use of both an encryption safe harbour and a notification trigger as our preferred choice for a data breach notification regulatory framework.