939 results for FREQUENCY APPROACH
Abstract:
Improved public awareness and strong sentiment towards environmental issues will continue to create increasing demand for sustainable housing (SH) in the coming years. Despite this potential, the uptake rate of sustainable housing in new builds and through home renovation is not as high as expected within the housing industry. This is in contrast to the influx of emerging building technologies, new materials and innovative designs seen in exemplar homes built worldwide. How we should use the increasing awareness of SH and emerging technologies as an impetus to change the unsustainable designs and practices of the building industry is high on the agenda of government and of the majority of the stakeholders involved. This warrants the study of multifaceted strategies that meet the needs of multiple stakeholders and integrate seamlessly into housing development processes. Specifically, the different perceptions, roles and incentives of stakeholders, who inevitably need to ensure their benefits and commercial returns, should be highlighted and acted upon.

This paper discusses the preliminary findings of a research project that aims to promote SH implementation by identifying and materializing the mutual benefits among key stakeholders. This aim is to be achieved through questionnaire surveys, structural equation modelling, interviews and case studies with seven major stakeholders within the Australian housing industry. The research identifies the influence and relationship of relevant factors, investigates preferences, similarities and differences between stakeholders on perceived benefits, and in turn explores a mutual-benefit strategy package that facilitates decision making towards sustainable housing development.
Abstract:
The load–frequency control (LFC) problem has been one of the major subjects in power systems. In practice, LFC systems use proportional–integral (PI) controllers. However, since these controllers are designed using a linear model, the non-linearities of the system are not accounted for, and they cannot deliver good dynamic performance over a wide range of operating conditions in a multi-area power system. Because of the distributed nature of a multi-area power system, a strategy for solving this problem using a multi-agent reinforcement learning (MARL) approach is presented. It consists of two agents in each power area: the estimator agent provides the area control error (ACE) signal based on the frequency bias estimation, and the controller agent uses reinforcement learning to control the power system, with genetic algorithm optimisation used to tune its parameters. This method does not depend on any knowledge of the system and admits considerable flexibility in defining the control objective. Moreover, finding the ACE signal from the frequency bias estimation improves LFC performance, and using MARL realises parallel computation, leading to a high degree of scalability. To illustrate the accuracy of the proposed approach, a three-area power system example is given with two scenarios.
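A minimal sketch (not the paper's implementation) of the two-agent idea described above: one function plays the estimator agent, forming ACE_i = dP_tie,i + B_i * df_i from an estimated frequency bias, while a tabular Q-learning update stands in for the controller agent. The discretisation, gains, and the one-line stand-in for the area dynamics are all assumptions for illustration; the genetic-algorithm tuning of the learner's parameters is omitted.

```python
# Hedged sketch of the estimator-agent / controller-agent split; all numerical
# choices (bins, gains, stand-in dynamics) are illustrative assumptions.
import numpy as np

N_BINS, N_ACTIONS = 21, 5                       # discretised ACE states, control actions
ace_edges = np.linspace(-0.5, 0.5, N_BINS - 1)  # assumed p.u. ACE bin boundaries
actions = np.linspace(-0.1, 0.1, N_ACTIONS)     # candidate supplementary control signals (p.u.)
Q = np.zeros((N_BINS, N_ACTIONS))

def area_control_error(dP_tie, df, bias):
    """Estimator agent: ACE from tie-line power deviation and frequency deviation."""
    return dP_tie + bias * df

def choose_action(state, eps=0.1):
    """Controller agent: epsilon-greedy choice over the discretised ACE state."""
    if np.random.rand() < eps:
        return np.random.randint(N_ACTIONS)
    return int(np.argmax(Q[state]))

def q_update(s, a, reward, s_next, alpha=0.1, gamma=0.95):
    """One-step Q-learning update; alpha/gamma are the parameters a GA would tune."""
    Q[s, a] += alpha * (reward + gamma * Q[s_next].max() - Q[s, a])

# Toy interaction loop against a crude stand-in for one area's frequency response.
df, dP_tie, bias = 0.02, 0.01, 20.0
for _ in range(1000):
    ace = area_control_error(dP_tie, df, bias)
    s = int(np.digitize(ace, ace_edges))
    a = choose_action(s)
    df = 0.9 * df - 0.05 * actions[a]           # stand-in area dynamics (assumed)
    ace_next = area_control_error(dP_tie, df, bias)
    s_next = int(np.digitize(ace_next, ace_edges))
    q_update(s, a, -abs(ace_next), s_next)      # reward penalises ACE magnitude
```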
Abstract:
A positive Buck-Boost converter is a known DC-DC converter which may be controlled to act as a Buck or a Boost converter with the same polarity as the input voltage. This converter has four switching states, which include all the switching states of the above-mentioned DC-DC converters. In addition, there is one switching state which provides a degree of freedom for the positive Buck-Boost converter in comparison to the Buck, Boost, and inverting Buck-Boost converters. In other words, the positive Buck-Boost converter offers a higher level of flexibility in its inductor current control compared to the other DC-DC converters. In this paper this extra degree of freedom is utilised to increase robustness against input voltage fluctuations and load changes. To exploit this capability of the positive Buck-Boost converter, two different control strategies are proposed which control the inductor current and output voltage against any fluctuations in input voltage and load changes. Mathematical analyses for dynamic and steady-state conditions are presented, and simulation results verify the proposed method.
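For illustration, a hedged sketch of the state-space averaged model behind such analyses: the two independent duty cycles d1 (buck leg) and d2 (boost leg) are the extra degree of freedom noted above. The component values and the open-loop duty steps are assumptions, not the paper's design or its control strategies.

```python
# Hedged sketch: averaged model of a positive (non-inverting) Buck-Boost stage
#   L diL/dt = d1*Vin - (1 - d2)*vout,   C dvout/dt = (1 - d2)*iL - vout/R
# Component values and duty cycles below are illustrative assumptions only.
L, C, R = 220e-6, 470e-6, 5.0        # inductor (H), capacitor (F), load (ohm)
Vin, dt = 24.0, 1e-6                 # input voltage (V) and Euler step (s)

def simulate(d1, d2, iL0=0.0, v0=0.0, t_end=0.03):
    """Integrate the averaged equations with forward Euler from a given state."""
    iL, v = iL0, v0
    for _ in range(int(t_end / dt)):
        diL = (d1 * Vin - (1.0 - d2) * v) / L
        dv = ((1.0 - d2) * iL - v / R) / C
        iL, v = iL + dt * diL, v + dt * dv
    return iL, v

iL, v_buck = simulate(d1=0.5, d2=0.0)                       # buck-like: vout -> d1*Vin = 12 V
_, v_boost = simulate(d1=1.0, d2=0.5, iL0=iL, v0=v_buck)    # boost-like: vout -> Vin/(1-d2) = 48 V
print(f"buck-mode output ~ {v_buck:.1f} V, boost-mode output ~ {v_boost:.1f} V")
```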
Abstract:
This article assesses the 'Managing Diversity' (MD) approach in Australia, examining its drivers, discussing its relationship to legislation designed to promote equity, and examining it as a set of management practices. It has been plausibly argued, on efficiency grounds, that responsibility for achieving equality objectives must be shifted to organisations as this links contextual conditions to organisational processes. However, even where there is some prescription and guidance such as that provided by Australian Equal Employment Opportunity (EEO) legislation targeted specifically to women employees, both practice and outcomes are variable. This is even more the case with MD where there are no guiding principles or legislative support. The article examines the best practice EEO and MD programs of Australian organisations to demonstrate the approaches and programs that are being developed at the workplace and to highlight the limitations of the 'business case' approach underlying such programs.
Abstract:
Despite more than three decades of research, there is a limited understanding of the transactional processes of appraisal, stress and coping. This has led to calls for more focused research on the entire process that underlies these variables. To date, there remains a paucity of such research. The present study examined Lazarus and Folkman's (1984) transactional model of stress and coping. One hundred and twenty-nine Australian participants in full-time employment (i.e., nurses and administration employees) were recruited. There were 49 male (age mean = 34, SD = 10.51) and 80 female (age mean = 36, SD = 10.31) participants. The analysis of three path models indicated that, in addition to the original paths found in Lazarus and Folkman's transactional model (primary appraisal → secondary appraisal → stress → coping), there were also direct links between primary appraisal and stress level at time one, and between stress level at time one and stress level at time two. This study provides additional insights into the transactional process and extends our understanding of how individuals appraise, cope with and experience occupational stress.
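As a hedged illustration of the modelling approach only (not the study's data or results), a path model of this form can be estimated as a chain of regressions; the synthetic data below simply encode the hypothesised paths, including the direct primary appraisal → stress link, with invented coefficients.

```python
# Hedged sketch: path analysis as sequential OLS regressions on synthetic data.
# The sample size matches the abstract (129); everything else is invented.
import numpy as np

rng = np.random.default_rng(0)
n = 129
primary = rng.normal(size=n)
secondary = 0.5 * primary + rng.normal(scale=0.8, size=n)
stress_t1 = 0.4 * secondary + 0.3 * primary + rng.normal(scale=0.8, size=n)   # direct path included
coping = 0.6 * stress_t1 + rng.normal(scale=0.8, size=n)

def path_coefficients(y, *predictors):
    """OLS estimates of standardised path coefficients for one endogenous variable."""
    X = np.column_stack([(p - p.mean()) / p.std() for p in predictors])
    yz = (y - y.mean()) / y.std()
    beta, *_ = np.linalg.lstsq(X, yz, rcond=None)
    return beta

print("secondary  <- primary:            ", path_coefficients(secondary, primary).round(2))
print("stress_t1  <- secondary, primary: ", path_coefficients(stress_t1, secondary, primary).round(2))
print("coping     <- stress_t1:          ", path_coefficients(coping, stress_t1).round(2))
```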
Abstract:
Background: There is a sound rationale for the population-based approach to falls injury prevention, but there is currently insufficient evidence to advise governments and communities on how they can use population-based strategies to achieve desired reductions in the burden of falls-related injury.

Aim: To quantify the effectiveness of a streamlined (and thus potentially sustainable and cost-effective), population-based, multi-factorial falls injury prevention program for people over 60 years of age.

Methods: Population-based falls-prevention interventions were conducted at two geographically defined and separate Australian sites: Wide Bay, Queensland, and Northern Rivers, NSW. Changes in the prevalence of key risk factors and in rates of injury outcomes within each community were compared before and after program implementation, and changes in rates of injury outcomes in each community were also compared with the rates in their respective States.

Results: The interventions in neither community substantially decreased the rate of falls-related injury among people aged 60 years or older, although there was some evidence of reductions in the occurrence of multiple falls reported by women. In addition, there was some indication of improvements in fall-related risk factors, but the magnitudes were generally modest.

Conclusion: The evidence suggests that low-intensity population-based falls prevention programs may not be as effective as those that are intensively implemented.
Abstract:
Adiabatic compression testing of components in gaseous oxygen is a test method that is utilized worldwide and is commonly required to qualify a component for ignition tolerance under its intended service. This testing is required by many industry standards organizations and government agencies; however, a thorough evaluation of the test parameters and test system influences on the thermal energy produced during the test has not yet been performed. This paper presents a background for adiabatic compression testing and discusses an approach to estimating potential differences in the thermal profiles produced by different test laboratories. A “Thermal Profile Test Fixture” (TPTF) is described that is capable of measuring and characterizing the thermal energy of a typical pressure shock produced by any test system. The test systems at Wendell Hull & Associates, Inc. (WHA) in the USA and at the BAM Federal Institute for Materials Research and Testing in Germany are compared in this manner, and some of the data obtained are presented. The paper also introduces a new way of comparing the test method to idealized processes to perform system-by-system comparisons. Thus, the paper introduces an “Idealized Severity Index” (ISI) of the thermal energy to characterize a rapid pressure surge. From the TPTF data a “Test Severity Index” (TSI) can also be calculated, so that the thermal energies developed by different test systems can be compared to each other and to the ISI for the equivalent isentropic process. Finally, a “Service Severity Index” (SSI) is introduced to characterize the thermal energy of actual service conditions. This paper is the second in a series of publications planned on the subject of adiabatic compression testing.
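The abstract does not define the indices themselves, but the isentropic reference they are compared against follows from the ideal-gas relation T2 = T1 (P2/P1)^((γ−1)/γ). A hedged sketch with illustrative pressures, not values from the paper:

```python
# Hedged sketch: ideal-gas isentropic end temperature for a rapid pressure surge,
# the reference process the severity indices are compared against. Numbers below
# are assumed for illustration; the ISI/TSI definitions are not given in the abstract.

GAMMA_O2 = 1.40  # ratio of specific heats for oxygen near room temperature

def isentropic_end_temperature(T1_K: float, p1_bar: float, p2_bar: float,
                               gamma: float = GAMMA_O2) -> float:
    """Temperature after adiabatic, reversible compression of an ideal gas from p1 to p2."""
    return T1_K * (p2_bar / p1_bar) ** ((gamma - 1.0) / gamma)

if __name__ == "__main__":
    # Example: compressing oxygen from 1 bar, 20 degC to 250 bar (assumed conditions).
    T2 = isentropic_end_temperature(T1_K=293.15, p1_bar=1.0, p2_bar=250.0)
    print(f"isentropic end temperature ~ {T2:.0f} K ({T2 - 273.15:.0f} degC)")
```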
Abstract:
A configurable process model describes a family of similar process models in a given domain. Such a model can be configured to obtain a specific process model that is subsequently used to handle individual cases, for instance, to process customer orders. Process configuration is notoriously difficult as there may be all kinds of interdependencies between configuration decisions. In fact, an incorrect configuration may lead to behavioral issues such as deadlocks and livelocks. To address this problem, we present a novel verification approach inspired by the “operating guidelines” used for partner synthesis. We view the configuration process as an external service, and compute a characterization of all such services which meet particular requirements using the notion of configuration guideline. As a result, we can characterize all feasible configurations (i.e., configurations without behavioral problems) at design time, instead of repeatedly checking each individual configuration while configuring a process model.
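As a loose illustration of the design-time idea only (and explicitly not the operating-guidelines/partner-synthesis technique the paper uses), the sketch below enumerates a toy configuration space and filters it against invented interdependency constraints once, up front, rather than re-checking each configuration as it is chosen.

```python
# Hedged toy example: characterise the feasible configurations of a small,
# invented configurable process up front. Activity names and constraints are
# illustrative assumptions, not taken from the paper.
from itertools import product

activities = ["check_credit", "ship_express", "ship_standard", "invoice"]
options = ["allow", "block"]                     # simplified configuration decisions

def satisfies_interdependencies(cfg: dict) -> bool:
    """Example interdependencies that would otherwise surface as behavioural errors."""
    # At least one shipping variant must remain, or the process deadlocks.
    if cfg["ship_express"] == "block" and cfg["ship_standard"] == "block":
        return False
    # Express shipping requires the credit check to stay in the model.
    if cfg["ship_express"] == "allow" and cfg["check_credit"] == "block":
        return False
    return True

# Design-time characterisation: every configuration that passes the checks.
feasible = []
for choice in product(options, repeat=len(activities)):
    cfg = dict(zip(activities, choice))
    if satisfies_interdependencies(cfg):
        feasible.append(cfg)

print(f"{len(feasible)} of {len(options) ** len(activities)} configurations are feasible")
```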
Abstract:
In this thesis an investigation into theoretical models for the formation and interaction of nanoparticles is presented. The work presented includes a literature review of current models followed by a series of five chapters of original research. This thesis has been submitted in partial fulfilment of the requirements for the degree of doctor of philosophy by publication and therefore each of the five chapters consists of a peer-reviewed journal article. The thesis is then concluded with a discussion of what has been achieved during the PhD candidature, the potential applications for this research and ways in which the research could be extended in the future. In this thesis we explore stochastic models pertaining to the interaction and evolution mechanisms of nanoparticles. In particular, we explore in depth the stochastic evaporation of molecules due to thermal activation and its ultimate effect on nanoparticle sizes and concentrations. Secondly, we analyse the thermal vibrations of nanoparticles suspended in a fluid and subject to standing oscillating drag forces (as would occur in a standing sound wave), and finally on lattice surfaces in the presence of high heat gradients. We have described in this thesis a number of new models for the description of multicompartment networks joined by multiple, stochastically evaporating links. The primary motivation for this work is in the description of thermal fragmentation, in which multiple molecules holding parts of a carbonaceous nanoparticle may evaporate. Ultimately, these models predict the rate at which the network or aggregate fragments into smaller networks/aggregates and with what aggregate size distribution. The models are highly analytic and describe the fragmentation of a link holding multiple bonds using Markov processes that best describe different physical situations, and these processes have been analysed using a number of mathematical methods. The fragmentation of the network/aggregate is then predicted using combinatorial arguments. Whilst there is some scepticism in the scientific community pertaining to the proposed mechanism of thermal fragmentation, we have presented compelling evidence in this thesis supporting the currently proposed mechanism and shown that our models can accurately match experimental results. This was achieved using a realistic simulation of the fragmentation of the fractal carbonaceous aggregate structure using our models. Furthermore, in this thesis a method of manipulation using acoustic standing waves is investigated. In our investigation we analysed the effect of frequency and particle size on the ability of the particle to be manipulated by means of a standing acoustic wave. In our results, we report the existence of a critical frequency for a particular particle size. This frequency is inversely proportional to the Stokes time of the particle in the fluid. We also find that for large frequencies the subtle Brownian motion of even larger particles plays a significant role in the efficacy of the manipulation. This is due to the decreasing size of the boundary layer between acoustic nodes. Our model utilises a multiple-time-scale approach to calculate the long-term effects of the standing acoustic field on the particles that are interacting with the sound. These effects are then combined with the effects of Brownian motion in order to obtain a complete mathematical description of the particle dynamics in such acoustic fields.
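A hedged sketch of the Stokes-time relationship mentioned above: for a sphere, the momentum relaxation (Stokes) time is τ = 2ρ_p a²/(9μ), and the abstract states only that the critical frequency scales as 1/τ; the prefactor, densities and viscosity below are assumptions made for illustration.

```python
# Hedged sketch: critical manipulation frequency scaling as the inverse of the
# particle's Stokes time. The 1/(2*pi) prefactor and the material values are
# assumptions; only the inverse proportionality comes from the abstract.
import math

def stokes_time(radius_m: float, particle_density: float, fluid_viscosity: float) -> float:
    """Momentum relaxation time of a spherical particle in a viscous fluid (SI units)."""
    return 2.0 * particle_density * radius_m ** 2 / (9.0 * fluid_viscosity)

def critical_frequency(radius_m: float, particle_density: float, fluid_viscosity: float,
                       prefactor: float = 1.0 / (2.0 * math.pi)) -> float:
    """Illustrative f_c ~ prefactor / tau_Stokes; the prefactor is an assumption."""
    return prefactor / stokes_time(radius_m, particle_density, fluid_viscosity)

if __name__ == "__main__":
    # Carbon-like particles in air (assumed density 2000 kg/m^3, viscosity 1.8e-5 Pa.s).
    for r in (0.1e-6, 1e-6, 10e-6):
        tau = stokes_time(r, 2000.0, 1.8e-5)
        print(f"radius {r * 1e6:4.1f} um -> Stokes time {tau:.2e} s, "
              f"f_c ~ {critical_frequency(r, 2000.0, 1.8e-5):.2e} Hz")
```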
Finally, in this thesis, we develop a numerical routine for the description of "thermal tweezers". Currently, the technique of thermal tweezers is predominantly theoretical; however, there have been a handful of successful experiments which demonstrate the effect in practice. Thermal tweezers is the name given to the way in which particles can be easily manipulated on a lattice surface by careful selection of a heat distribution over the surface. Typically, theoretical simulations of the effect can be rather time-consuming, with supercomputer facilities processing data over days or even weeks. Our alternative numerical method for the simulation of particle distributions pertaining to the thermal tweezers effect uses the Fokker-Planck equation to derive a quick numerical method for the calculation of the effective diffusion constant as a result of the lattice and the temperature. We then use this diffusion constant and solve the diffusion equation numerically using the finite volume method. This saves the algorithm from calculating many individual particle trajectories, since it describes the flow of the probability distribution of particles in a continuous manner. The alternative method outlined in this thesis can produce a larger quantity of accurate results on a household PC in a matter of hours, which is much better than was previously achievable.
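A minimal sketch of the continuum step described above, assuming a one-dimensional domain and an invented effective-diffusivity profile: an explicit, conservative finite-volume update of the diffusion equation with spatially varying D(x). It is not the thesis's code or its actual D(x); it only illustrates why this route avoids tracking individual particle trajectories.

```python
# Hedged sketch: explicit finite-volume solver for dp/dt = d/dx( D(x) dp/dx )
# with reflecting boundaries. Domain, D(x) profile and initial condition are
# assumptions for illustration; only total probability conservation is checked.
import numpy as np

nx, length = 200, 1.0
dx = length / nx
x = (np.arange(nx) + 0.5) * dx                      # cell centres
D = 1.0e-3 * (0.2 + 0.8 * np.abs(x - 0.5))          # assumed effective diffusivity profile D(x)
D_face = 0.5 * (D[:-1] + D[1:])                     # diffusivity at interior cell faces
dt = 0.4 * dx * dx / D.max()                        # explicit stability limit

p = np.exp(-((x - 0.3) ** 2) / (2 * 0.02 ** 2))     # localised initial probability density
p /= p.sum() * dx                                   # normalise to unit total probability

def step(p):
    """Conservative finite-volume update with zero-flux (reflecting) boundaries."""
    grad_flux = D_face * (p[1:] - p[:-1]) / dx      # D * dp/dx evaluated at interior faces
    dp = np.zeros_like(p)
    dp[:-1] += grad_flux                            # each face feeds its left cell ...
    dp[1:] -= grad_flux                             # ... and drains its right cell
    return p + dp * dt / dx

for _ in range(5000):
    p = step(p)

print(f"total probability: {p.sum() * dx:.4f} (conserved), peak density now {p.max():.2f}")
```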
Abstract:
Wideband frequency synthesisers have application in many areas, including test instrumentation and defence electronics. Miniaturisation of these devices provides many advantages to system designers, particularly in applications where extra space and weight are expensive. The purpose of this project was to miniaturise a wideband frequency synthesiser and package it for operation in several different environmental conditions while satisfying demanding technical specifications. The four primary and secondary goals to be achieved were: 1. an operating frequency range from low MHz to greater than 40 GHz, with resolution better than 1 MHz; 2. typical RF output power of +10 dBm, with maximum DC supply of 15 W; 3. a synthesiser package of only 150 × 100 × 30 mm; and 4. operating temperatures from −20 °C to +71 °C, and vibration levels over 7 g rms. This task was approached from multiple angles. Electrically, the system is designed to have as few functional blocks as possible. Off-the-shelf components are used for active functions instead of customised circuits. Mechanically, the synthesiser package is designed for efficient use of the available space. Two identical prototype synthesisers were manufactured to evaluate the design methodology and to show the repeatability of the design. Although further engineering development will improve the synthesiser’s performance, this project has successfully demonstrated a level of miniaturisation which sets a new benchmark for wideband synthesiser design. These synthesisers will meet the demands for smaller, lighter wideband sources. Potential applications include portable test equipment, and radar and electronic surveillance systems on unmanned aerial vehicles. They are also useful for reducing the overall weight and power consumption of other systems, even if small dimensions are not essential.
Abstract:
Knowledge has been recognised as a source of competitive advantage. Knowledge-based resources allow organisations to adapt products and services to the marketplace and deal with competitive challenges that enable them to compete more effectively. One factor critical to using knowledge-based resources is the ability to transfer knowledge as a dimension of the learning organisation. There are many elements that may influence whether knowledge transfer can be effectively achieved in an organisation such as leadership, problem-solving behaviours, support structures, change management capabilities, absorptive capacity and the nature of the knowledge. An existing framework was applied in a case study to explain how knowledge transfer can be managed effectively and to identify emerging issues or additional factors involved in the process. As a result, a refined framework is proposed that provides a better understanding for the effective management of knowledge transfer processes that can provide a competitive advantage.
Abstract:
Automatic Speech Recognition (ASR) has matured into a technology which is becoming more common in our everyday lives, and is emerging as a necessity to minimise driver distraction when operating in-car systems such as navigation and infotainment. In "noise-free" environments, word recognition performance of these systems has been shown to approach 100%; however, this performance degrades rapidly as the level of background noise is increased. Speech enhancement is a popular method for making ASR systems more robust. Single-channel spectral subtraction was originally designed to improve human speech intelligibility, and many attempts have been made to optimise this algorithm in terms of signal-based metrics such as maximised Signal-to-Noise Ratio (SNR) or minimised speech distortion. Such metrics are used to assess enhancement performance for intelligibility, not speech recognition, therefore making them sub-optimal for ASR applications. This research investigates two methods for closely coupling subtractive-type enhancement algorithms with ASR: (a) a computationally efficient Mel-filterbank noise subtraction technique based on likelihood-maximisation (LIMA), and (b) introducing phase spectrum information to enable spectral subtraction in the complex frequency domain. Likelihood-maximisation uses gradient descent to optimise parameters of the enhancement algorithm to best fit the acoustic speech model given a word sequence known a priori. Whilst this technique is shown to improve ASR word accuracy performance, it is also identified to be particularly sensitive to non-noise mismatches between the training and testing data. Phase information has long been ignored in spectral subtraction as it is deemed to have little effect on human intelligibility. In this work it is shown that phase information is important in obtaining highly accurate estimates of clean speech magnitudes which are typically used in ASR feature extraction. Phase Estimation via Delay Projection is proposed based on the stationarity of sinusoidal signals, and demonstrates the potential to produce improvements in ASR word accuracy over a wide range of SNRs. Throughout the dissertation, consideration is given to practical implementation in vehicular environments, which resulted in two novel contributions: a LIMA framework which takes advantage of the grounding procedure common to speech dialogue systems, and a resource-saving formulation of frequency-domain spectral subtraction for realisation in field-programmable gate array hardware. The techniques proposed in this dissertation were evaluated using the Australian English In-Car Speech Corpus, which was collected as part of this work. This database is the first of its kind within Australia and captures real in-car speech of 50 native Australian speakers in seven driving conditions common to Australian environments.
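For reference, a hedged sketch of the conventional magnitude-domain spectral subtraction that this work builds on (not the proposed LIMA or phase-based methods); the frame sizes, over-subtraction factor and spectral floor are typical textbook values assumed here for illustration.

```python
# Hedged sketch of baseline magnitude spectral subtraction: estimate the noise
# magnitude from leading (assumed speech-free) frames, subtract it with an
# over-subtraction factor and spectral floor, and resynthesise with the noisy phase.
import numpy as np

def spectral_subtraction(x, fs, frame_ms=25, hop_ms=10, noise_frames=10, alpha=2.0, beta=0.02):
    """Subtract a noise magnitude estimate from each frame's magnitude spectrum."""
    n, hop = int(fs * frame_ms / 1000), int(fs * hop_ms / 1000)
    win = np.hanning(n)
    frames = [x[i:i + n] * win for i in range(0, len(x) - n, hop)]
    spectra = np.array([np.fft.rfft(f) for f in frames])
    noise_mag = np.abs(spectra[:noise_frames]).mean(axis=0)      # noise magnitude estimate
    mag, phase = np.abs(spectra), np.angle(spectra)
    clean_mag = np.maximum(mag - alpha * noise_mag, beta * mag)  # over-subtract, then floor
    out = np.zeros(len(frames) * hop + n)
    for k, (m, ph) in enumerate(zip(clean_mag, phase)):          # overlap-add resynthesis
        out[k * hop:k * hop + n] += np.fft.irfft(m * np.exp(1j * ph), n=n) * win
    return out

if __name__ == "__main__":
    fs = 16000
    t = np.arange(fs) / fs
    clean = np.sin(2 * np.pi * 440 * t)                          # stand-in for speech
    noisy = np.concatenate([np.zeros(fs // 4), clean]) + 0.3 * np.random.randn(fs + fs // 4)
    enhanced = spectral_subtraction(noisy, fs)
    print("enhanced signal length:", len(enhanced))
```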
Abstract:
Ecological problems are typically multifaceted and need to be addressed from both a scientific and a management perspective. There is a wealth of modelling and simulation software available, each designed to address a particular aspect of the issue of concern. Choosing the appropriate tool, making sense of the disparate outputs, and taking decisions when little or no empirical data is available are everyday challenges facing the ecologist and environmental manager. Bayesian Networks (BNs) provide a statistical modelling framework that enables analysis and integration of information in its own right as well as integration of a variety of models addressing different aspects of a common overall problem. There has been increased interest in the use of BNs to model environmental systems and issues of concern. However, the development of more sophisticated BNs, utilising dynamic and object-oriented (OO) features, is still at the frontier of ecological research. Such features are particularly appealing in an ecological context, since the underlying facts are often spatial and temporal in nature. This thesis focuses on an integrated BN approach which facilitates OO modelling. Our research devises a new heuristic method, the Iterative Bayesian Network Development Cycle (IBNDC), for the development of BN models within a multi-field and multi-expert context. Expert elicitation is a popular method used to quantify BNs when data is sparse but expert knowledge is abundant. The resulting BNs need to be substantiated and validated taking this uncertainty into account. Our research demonstrates the application of the IBNDC approach to support these aspects of BN modelling. The complex nature of environmental issues makes them ideal case studies for the proposed integrated approach to modelling. Moreover, they lend themselves to a series of integrated sub-networks describing different scientific components, combining scientific and management perspectives, or pooling similar contributions developed in different locations by different research groups. In southern Africa the two largest free-ranging cheetah (Acinonyx jubatus) populations are in Namibia and Botswana, where the majority of cheetahs are located outside protected areas. Consequently, cheetah conservation in these two countries is focussed primarily on the free-ranging populations as well as the mitigation of conflict between humans and cheetahs. In contrast, in neighbouring South Africa, the majority of cheetahs are found in fenced reserves. Nonetheless, conflict between humans and cheetahs remains an issue here. Conservation effort in South Africa is also focussed on managing the geographically isolated cheetah populations as one large meta-population. Relocation is one option among a suite of tools used to resolve human-cheetah conflict in southern Africa. Successfully relocating captured problem cheetahs, and maintaining a viable free-ranging cheetah population, are two environmental issues in cheetah conservation forming the first case study in this thesis. The second case study involves the initiation of blooms of Lyngbya majuscula, a blue-green alga, in Deception Bay, Australia. L. majuscula is a toxic alga whose blooms have severe health, ecological and economic impacts on the community located in their vicinity. Deception Bay is an important tourist destination given its proximity to Brisbane, Australia’s third largest city. Lyngbya is one of several algae considered to form Harmful Algal Blooms (HABs).
This group includes other widespread bloom-forming algae, such as those responsible for red tides. The occurrence of Lyngbya blooms is not a local phenomenon: blooms of this toxic weed occur in coastal waters worldwide. With the increase in frequency and extent of these HABs, it is important to gain a better understanding of the underlying factors contributing to the initiation and sustenance of these blooms. This knowledge will contribute to better management practices and the identification of those management actions which could prevent or diminish the severity of these blooms.
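As a toy illustration of the kind of discrete Bayesian network used in these case studies (the variables, structure and probabilities are invented, not drawn from the thesis), the sketch below answers a simple bloom query by brute-force enumeration rather than with any BN library:

```python
# Hedged toy example: a three-node discrete BN (nutrients, light -> bloom) with
# invented conditional probability tables, queried by enumerating the joint.
from itertools import product

P_N = {1: 0.3, 0: 0.7}                      # P(high nutrient load) - assumed
P_L = {1: 0.6, 0: 0.4}                      # P(high light availability) - assumed
P_B_given = {(1, 1): 0.8, (1, 0): 0.3,      # P(bloom = 1 | nutrients, light) - assumed
             (0, 1): 0.2, (0, 0): 0.05}

def joint(n, l, b):
    """Joint probability of one complete assignment of the three variables."""
    p_b1 = P_B_given[(n, l)]
    return P_N[n] * P_L[l] * (p_b1 if b == 1 else 1.0 - p_b1)

def posterior_bloom(evidence):
    """P(bloom = 1 | evidence) by summing the joint over unobserved variables."""
    num = den = 0.0
    for n, l, b in product((0, 1), repeat=3):
        state = {"nutrients": n, "light": l, "bloom": b}
        if any(state[k] != v for k, v in evidence.items()):
            continue
        p = joint(n, l, b)
        den += p
        if b == 1:
            num += p
    return num / den

print("P(bloom | high light)             =", round(posterior_bloom({"light": 1}), 3))
print("P(bloom | high light, high N)     =", round(posterior_bloom({"nutrients": 1, "light": 1}), 3))
```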
Abstract:
Discharge planning has become increasingly important, with current trends toward shorter hospital stays, increased health care costs, and more community-based health services. Effective discharge planning ensures the safety and ongoing care for patients [1], and it also benefits health care providers and organizations. It results in shorter hospital stays, fewer readmissions, higher access rates to post-hospitalization services, greater patient satisfaction with the discharge, and improved quality of life and continuity of care [2,3]. All acute care patients and their caregivers require some degree of preparation for discharge home: education about their health status, risks, and treatment; help setting health goals and maintaining a good level of self-care; information about community resources; and follow-up appointments and referrals to appropriate community health providers. Inadequate preparation exposes the patient to unnecessary risks of recurrence or complications of the acute complaint, neglect of nonacute comorbidities, mismanagement and side effects of medication, disruption of family and social life, emotional distress, and financial loss [2-4]. The result may be re-presentation to the emergency department. It is noteworthy that up to 18% of ED presentations are revisits within 72 hours of the original visit [5]; many of these are considered preventable [6]. It is a primary responsibility of nurses to ensure that patients return to the community adequately prepared and with appropriate support in place. Up to 65% of ED patients are discharged home from the emergency department [7], and the characteristics of the emergency department and its patient population make the provision of a high standard of discharge planning uniquely difficult. In addition, discharge planning is neglected in contemporary emergency nursing: there are no monographs devoted to the subject, and there is little published research. In this article 3 issues are explored: the importance of emergency nurses’ participation in the discharge-planning process; impediments to their participation; and strategies to improve discharge planning in the emergency department.