954 results for appropriate technology


Relevance:

20.00%

Publisher:

Abstract:

Clinical experience plays an important role in the development of expertise, particularly when coupled with reflection on practice. There is debate, however, regarding the amount of clinical experience required to become an expert: suggested lengths of practice range from five to 15 years. This study investigated the association between length of experience and therapists' level of expertise in the field of cerebral palsy with upper limb hypertonicity, using an empirical procedure named Cochrane–Weiss–Shanteau (CWS). The methodology involved re-analysis of quantitative data collected in two previous studies. In Study 1, 18 experienced occupational therapists made hypothetical clinical decisions for 110 case vignettes; in Study 2, 29 therapists considered 60 case vignettes drawn randomly from those used in Study 1. A CWS index was calculated for each participant's case decisions. Then, in each study, Spearman's rho was calculated to assess the correlation between duration of experience and level of expertise. No significant association was found between these two variables in either study. These analyses corroborate previous findings of no association between length of experience and judgemental performance. Length of experience may therefore not be an appropriate criterion for determining level of expertise in cerebral palsy practice.
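As a minimal sketch of the correlation step described above, assuming hypothetical per-therapist data (the CWS index, a ratio of judgement discrimination to inconsistency, is taken as given rather than recomputed), the analysis amounts to a single Spearman rank correlation:

```python
# Minimal sketch of the correlation step, with hypothetical data; the
# CWS index values here are placeholders, not the study's results.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical per-therapist values for a Study-1-sized sample (n = 18).
years_experience = rng.uniform(2, 25, size=18)
cws_index = rng.uniform(0.5, 3.0, size=18)   # placeholder expertise scores

rho, p_value = spearmanr(years_experience, cws_index)
print(f"Spearman's rho = {rho:.3f}, p = {p_value:.3f}")
```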

Relevance:

20.00%

Publisher:

Abstract:

Background: The quality of stormwater runoff from ports is significant as it can be an important source of pollution to the marine environment. This is a significant issue for the Port of Brisbane, which is located in an area of high environmental value. It is therefore imperative to develop an in-depth understanding of stormwater runoff quality to ensure that appropriate strategies are in place for quality improvement.

The Port currently has a network of stormwater sample collection points where event-based samples, together with grab samples, are tested for a range of water quality parameters. Whilst this information provides a 'snapshot' of the pollutants being washed from the catchments, it does not allow a quantifiable assessment of the total contaminant loads discharged to the waters of Moreton Bay. Nor does it represent pollutant build-up and wash-off from the different land uses across the broader range of rainfall events that might be expected. As such, it is difficult to relate stormwater quality to different pollutant sources within the Port environment. This makes tracking the sources of pollutants reaching receiving waters, and in turn implementing appropriate mitigation measures, extremely difficult. Without this detailed understanding, the efficacy of the various stormwater quality mitigation measures implemented also cannot be determined with certainty.

Current knowledge on port stormwater runoff quality: Little knowledge currently exists regarding the pollutant generation capacity of port-specific land uses, as these do not necessarily compare well with conventional urban industrial or commercial land uses, owing to the specific nature of port activities such as inter-modal operations and cargo management. Furthermore, traffic characteristics in a port area differ from those of a conventional urban area. Consequently, using data inputs based on industrial and commercial land uses for modelling purposes is questionable. A comprehensive review of published research failed to locate any investigations of pollutant build-up and wash-off for port-specific land uses, and very limited information is made available by ports worldwide about the pollution generation potential of their facilities. Published work in this area has essentially focussed on the water quality or environmental values of the receiving waters, such as the downstream bay or estuary.

The Project: The research project is an outcome of the collaborative Partnership between the Port of Brisbane Corporation (POBC) and Queensland University of Technology (QUT). A key feature of this Partnership is the undertaking of 'cutting edge' research to strengthen the environmental custodianship of the Port area. This project aims to develop a port-specific stormwater quality model to allow informed decision making in relation to stormwater quality improvement in the context of the Port's continued growth.

Stage 1 of the research project focussed on the assessment of pollutant build-up and wash-off, using rainfall simulation, at the current Port of Brisbane facilities, with the longer-term objective of contributing to the development of ecological risk mitigation strategies for future expansion scenarios. Investigating complex processes such as pollutant wash-off using naturally occurring rainfall events has inherent difficulties; these can be overcome by using simulated rainfall.

The deliverables for Stage 1 included:
* Pollutant build-up and wash-off profiles for six primary land uses within the Port of Brisbane, to be used for water quality model development.
* Recommendations regarding future stormwater quality monitoring and pollution mitigation measures.

The outcomes are expected to deliver the following benefits to the Port of Brisbane:
* The availability of Port-specific pollutant build-up and wash-off data will enable the implementation of customised stormwater pollution mitigation strategies.
* The water quality data collected will form the baseline data for a Port-specific water quality model for mitigation and predictive purposes.
* The Port will be at the cutting edge of water quality management and environmental best practice in the context of port infrastructure.

Conclusions: The important conclusions from the study are:
* The study confirmed that the Port environment is unique in terms of pollutant characteristics and is not comparable to typical urban land uses.
* For most pollutant types, the Port land uses exhibited lower pollutant concentrations than typical urban land uses.
* Pollutant characteristics varied across the different land uses and were not consistent in terms of land use. Hence, the implementation of stereotypical structural water quality improvement devices could be of limited value.
* The <150 µm particle size range was predominant in suspended solids for pollutant build-up as well as wash-off. Therefore, if suspended solids are targeted as the surrogate parameter for water quality improvement, this specific particle size range needs to be removed.

Recommendations: Based on the study results, the following preliminary recommendations are made:
* Due to the appreciable variation in pollutant characteristics across port land uses, water quality monitoring stations should preferably be located such that source areas can be easily identified.
* The significant pollutants identified for the different land uses should enable the development of a more customised water quality monitoring and testing regime targeting the critical pollutants.
* A 'one size fits all' approach may not be appropriate for the different port land uses due to their varying pollutant characteristics; pollution mitigation will need to be tailored to suit the specific land use.
* To be effective, any structural measures implemented for pollution mitigation should be capable of removing suspended solids smaller than 150 µm.
* Given the results presented, and particularly the fact that the Port land uses cannot be compared to conventional urban land uses in relation to pollutant generation, consideration should be given to developing a port-specific water quality model.
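The summary above refers repeatedly to pollutant build-up and wash-off profiles without giving their functional form. As a hedged illustration only, the exponential relationships commonly used in urban stormwater models (not necessarily the forms fitted in the Port of Brisbane study) look like this:

```python
# Hedged sketch: commonly used empirical forms for pollutant build-up and
# wash-off in urban stormwater modelling. Illustrative textbook forms and
# parameter values, not the equations fitted in the Port of Brisbane study.
import numpy as np

def buildup(t_days, b_max, k_b):
    """Exponential build-up toward an asymptotic maximum load.

    t_days : antecedent dry days
    b_max  : maximum build-up load (e.g. kg/ha)
    k_b    : build-up rate constant (1/day)
    """
    return b_max * (1.0 - np.exp(-k_b * t_days))

def washoff(b0, intensity_mm_hr, duration_hr, k_w):
    """Exponential wash-off of the available surface load.

    b0              : load available at the start of the event
    intensity_mm_hr : rainfall intensity
    duration_hr     : event duration
    k_w             : wash-off coefficient (hr/mm)
    """
    return b0 * (1.0 - np.exp(-k_w * intensity_mm_hr * duration_hr))

# Example: load after 7 dry days, then a 20 mm/hr storm lasting 0.5 hr.
b0 = buildup(7, b_max=50.0, k_b=0.4)
print(f"available: {b0:.1f}, washed off: {washoff(b0, 20, 0.5, 0.1):.1f}")
```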

Relevance:

20.00%

Publisher:

Abstract:

Background: The quality of stormwater runoff from ports is significant as it can be an important source of pollution to the marine environment. This is also a significant issue for the Port of Brisbane, which is located in an area of high environmental value. It is therefore imperative to develop an in-depth understanding of stormwater runoff quality to ensure that appropriate strategies are in place for quality improvement, where necessary. To this end, the Port of Brisbane Corporation aimed to develop a port-specific stormwater model for the Fisherman Islands facility. The need has to be considered in the context of the proposed future developments of the Port area.

The Project: The research project is an outcome of the collaborative Partnership between the Port of Brisbane Corporation (POBC) and Queensland University of Technology (QUT). A key feature of this Partnership is that it seeks to undertake research that assists the Port in strengthening its environmental custodianship of the Port area through 'cutting edge' research and its translation into practical application.

The project was separated into two stages. The first stage developed a quantitative understanding of the pollutant load generation potential of the existing land uses. This knowledge was then used as input for the stormwater quality model developed in the subsequent stage. The aim is to expand this model across the yet-to-be-developed port expansion area, in order to predict pollutant loads in stormwater flows from that area, with the longer-term objective of contributing to the development of ecological risk mitigation strategies for future expansion scenarios.

Study approach: Stage 1 of the overall study confirmed that Port land uses are unique in terms of the anthropogenic activities occurring on them. This uniqueness results in stormwater quality characteristics distinct from those of conventional urban land uses. Therefore, it was not scientifically valid to treat the Port as a single land use category, or as similar to any typical urban land use. The approach adopted in this study was very different from conventional modelling studies, where modelling parameters are developed through calibration. The field investigations undertaken in Stage 1 created fundamental knowledge of pollutant build-up and wash-off in the different Port land uses. This knowledge was then used in the computer modelling so that the specific characteristics of pollutant build-up and wash-off could be replicated. Because measured build-up and wash-off parameters were used, no calibration process was required.

Conclusions: Stage 2 of the study was primarily undertaken using the SWMM stormwater quality model, a physically based model which replicates natural processes as closely as possible. The time step used and the catchment variability considered were adequate to accommodate the temporal and spatial variability of the input parameters, and the parameters used in the modelling reflect the true nature of rainfall-runoff and pollutant processes to the best of currently available knowledge. The initial loss values adopted in this study for impervious surfaces are relatively high compared with values noted in the research literature. However, given the scientifically valid approach used for the field investigations, it is appropriate to adopt the initial losses derived from this study for future modelling of Port land uses. The relatively high initial losses will significantly reduce both the runoff volume generated and the frequency of runoff events. Apart from initial losses, most of the other parameters used in the SWMM modelling are generic to most modelling studies.

Development of parameters for MUSIC model source nodes was one of the primary objectives of this study. MUSIC uses the mean and standard deviation of pollutant parameters based on a normal distribution. However, based on the values generated in this study, the variation of Event Mean Concentrations (EMCs) for Port land uses within the investigation period does not fit a normal distribution. This is possibly because only one location, the Port of Brisbane, was considered, unlike the MUSIC model, which was developed from a range of areas with different geographic and climatic conditions. Consequently, the assumptions used in MUSIC are not wholly applicable to the analysis of water quality in Port land uses. Therefore, when the parameters included in this report are used for MUSIC modelling, under- or over-estimation of annual pollutant loads may result. It is recommended that the annual pollutant load values given in the report be used as a guide to assess the accuracy of the modelling outcomes. A step-by-step guide to using the knowledge generated from this study for MUSIC modelling is given in Table 4.6.

Recommendations: The following recommendations are provided to further strengthen the cutting-edge nature of the work undertaken:
* It is important to further validate the approach recommended for stormwater quality modelling at the Port. Validation will require the collection of rainfall, runoff and water quality data from the selected Port land uses. Additionally, the recommended modelling approach could be applied to a soon-to-be-developed area to assess 'before' and 'after' scenarios.
* In the modelling study, TSS was adopted as the surrogate parameter for other pollutants, based on other urban water quality research undertaken at QUT. The validity of this approach should be further assessed for Port land uses.
* The adoption of TSS as a surrogate parameter, together with the confirmation that the <150 µm particle size range was predominant in suspended solids wash-off, gives rise to a number of important considerations. The ability of the existing structural stormwater mitigation measures to remove the <150 µm particle size range needs to be assessed, and the feasibility of introducing source control measures, as opposed to end-of-pipe measures, for stormwater quality improvement may also need to be considered.
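Following the MUSIC discussion above, a minimal sketch of the implied check, using hypothetical EMC values, is to derive the source-node mean and standard deviation and test whether the sample is consistent with a normal distribution before trusting the predicted annual loads:

```python
# Minimal sketch with hypothetical EMC values: derive mean/std for a MUSIC
# source node and test normality before relying on the predicted loads.
import numpy as np
from scipy.stats import shapiro

emc_tss = np.array([38.0, 52.5, 41.2, 97.8, 45.1, 60.3, 150.2, 48.7])  # mg/L

mean, std = emc_tss.mean(), emc_tss.std(ddof=1)
stat, p = shapiro(emc_tss)

print(f"EMC mean = {mean:.1f} mg/L, std = {std:.1f} mg/L")
if p < 0.05:
    print("Normality rejected: MUSIC's distributional assumption may not"
          " hold; treat predicted annual loads as indicative only.")
```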

Relevance:

20.00%

Publisher:

Abstract:

Fermentation feedstocks in the sugar industry are based on cane juice, B molasses or final molasses. Brazil has been producing ethanol by sending sugarcane juice directly to fermentation, or by using lower-quality juice as a diluent with B molasses to prepare the fermentation broth. One issue that has received only limited interest, particularly outside Brazil, is the most appropriate conditions for clarifying the juice going to fermentation. Irrespective of whether the juice supply is the total flow from the milling tandem or a diffuser station, or only part of that flow, removal of the insoluble solids is essential. However, the standard defecation process used by sugar factories around the world to clarify juice can introduce unwanted calcium ions and remove other nutrients, such as phosphorus and nitrogen, that are considered essential for the fermentation process. An investigation was undertaken by SRI to assess the effects on the constituents of cane juice when subjected to the typical clarification process in an Australian factory, and to determine what conditions would be needed to provide a clarified juice suitable for fermentation. Typical juices from one factory were clarified in laboratory trials under a range of pH conditions and the resulting clarified juices analysed. The results indicated that pH had a major effect on the residual concentrations of key constituents in the clarified juice, and that the selected clarification conditions are determined by the nominated quality criteria for the clarified juice feedstock. Further trials were conducted in overseas factories to confirm the results obtained in Australia. It became apparent that the preferred specifications for clarified juice going to fermentation vary from country to country. Each supplier of fermentation technology has criteria applying to the clarified juice feedstock that have a major impact on the standard of clarification required to achieve compliance.

Relevance:

20.00%

Publisher:

Abstract:

In this thesis, the relationship between air pollution and human health has been investigated using a Geographic Information System (GIS) as the analysis tool. The research focused on how vehicular air pollution affects human health. The main objective was to analyse the spatial variability of pollutants, taking Brisbane City in Australia as a case study, by identifying areas of high concentration of air pollutants and relating them to the numbers of deaths caused by air pollutants. A correlation test was performed to establish the relationship between air pollution, the number of deaths from respiratory disease, and the total distance travelled by road vehicles in Brisbane. GIS was used to investigate the spatial distribution of the air pollutants. The main finding of this research is the comparison between spatial and non-spatial analysis approaches, which indicated that correlation analysis combined with simple GIS buffer analysis, using average pollutant levels from a single monitoring station or a group of a few monitoring stations, is a relatively simple method for assessing the health effects of air pollution. There was a significant positive correlation between the variables under consideration, and the research shows a decreasing trend in nitrogen dioxide concentration at the Eagle Farm and Springwood sites and an increasing trend at the CBD site. Statistical analysis shows a positive relationship between emission levels and the number of deaths, though the impact is not uniform, as certain sections of the population are more vulnerable to exposure. Further statistical tests found that elderly people over 75 years of age and children between 0 and 15 years of age are the most vulnerable to air pollution exposure. A non-spatial approach alone may be insufficient for an appropriate evaluation of the impact of air pollutant variables and their inter-relationships; it is important to evaluate the spatial features of air pollutants before modelling the air pollution-health relationships.
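As a hedged sketch of the simple buffer analysis mentioned above, assuming hypothetical file names, column names and buffer distance, a workflow of this kind can be reproduced with geopandas by buffering each monitoring station and counting the deaths recorded inside each buffer:

```python
# Hedged sketch of a simple GIS buffer analysis with geopandas; file
# names, column names and the buffer distance are hypothetical.
import geopandas as gpd

stations = gpd.read_file("stations.shp")    # air-quality monitoring sites
deaths = gpd.read_file("resp_deaths.shp")   # geocoded respiratory deaths

# 2 km buffer around each station (assumes a projected CRS in metres).
buffers = stations.copy()
buffers["geometry"] = stations.geometry.buffer(2000)

# Count deaths falling inside each station's buffer.
joined = gpd.sjoin(deaths, buffers, predicate="within")
print(joined.groupby("index_right").size())
```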

Relevance:

20.00%

Publisher:

Abstract:

Uninhabited aerial vehicles (UAVs) are a cutting-edge technology at the forefront of aviation/aerospace research and development worldwide. Many consider their current military and defence applications as just a token of their enormous potential. Unlocking and fully exploiting this potential will see UAVs in a multitude of civilian applications and routinely operating alongside piloted aircraft. The key to realising the full potential of UAVs lies in addressing a host of regulatory, public relations, and technological challenges never encountered before. Aircraft collision avoidance is considered one of the most important issues to be addressed, given its safety-critical nature. The collision avoidance problem can be roughly organised into three areas: 1) Sense; 2) Detect; and 3) Avoid. Sensing is concerned with obtaining accurate and reliable information about other aircraft in the air; detection involves identifying potential collision threats based on available information; avoidance deals with the formulation and execution of appropriate manoeuvres to maintain safe separation. This thesis tackles the detection aspect of collision avoidance, via the development of a target detection algorithm capable of real-time operation onboard a UAV platform. One of the key challenges of the detection problem is the need to provide early warning, which translates to detecting potential threats whilst they are still far away, when their presence is likely to be obscured and hidden by noise. Another important consideration is the choice of sensors to capture target information, which has implications for the design and practical implementation of the detection algorithm.

The main contributions of the thesis are: 1) the proposal of a dim target detection algorithm combining image morphology and hidden Markov model (HMM) filtering approaches; 2) the novel use of relative entropy rate (RER) concepts for HMM filter design; 3) the characterisation of algorithm detection performance based on simulated data as well as real in-flight target image data; and 4) the demonstration of the proposed algorithm's capacity for real-time target detection. We also consider the extension of HMM filtering techniques and the application of RER concepts to target heading angle estimation.

In this thesis we propose a computer-vision based detection solution, due to the commercial-off-the-shelf (COTS) availability of camera hardware and the hardware's relatively low cost, power, and size requirements. The proposed target detection algorithm adopts a two-stage processing paradigm that begins with an image enhancement pre-processing stage, followed by a track-before-detect (TBD) temporal processing stage that has been shown to be effective in dim target detection. We compare the performance of two candidate morphological filters for the image pre-processing stage, and propose a multiple hidden Markov model (MHMM) filter for the TBD temporal processing stage. The role of the morphological pre-processing stage is to exploit the spatial features of potential collision threats, while the MHMM filter serves to exploit their temporal characteristics or dynamics. The problem of optimising our proposed MHMM filter has been examined in detail. Our investigation has produced a novel design process for the MHMM filter that exploits information theory and entropy-related concepts. The filter design process is posed as a mini-max optimisation problem based on a joint RER cost criterion.

We provide proof that this joint RER cost criterion provides a bound on the conditional mean estimate (CME) performance of our MHMM filter, which in turn establishes a strong theoretical basis connecting our filter design process to filter performance. Through this connection we can intelligently compare and optimise candidate filter models at the design stage, rather than having to resort to time-consuming Monte Carlo simulations to gauge the relative performance of candidate designs. Moreover, the underlying entropy concepts are not constrained to any particular model type, suggesting that the RER concepts established here may be generalised to provide a useful design criterion for multiple-model filtering approaches outside the class of HMM filters.

In this thesis we also evaluate the performance of our proposed target detection algorithm under realistic operating conditions, and give consideration to the practical deployment of the detection algorithm onboard a UAV platform. Two fixed-wing UAVs were engaged to recreate various collision-course scenarios to capture highly realistic vision (from an onboard camera perspective) of the moments leading up to a collision. Based on this collected data, our proposed detection approach was able to detect targets out to distances ranging from about 400 m to 900 m. These distances (with some assumptions about closing speeds and aircraft trajectories) translate to an advanced warning ahead of impact that approaches the 12.5 second response time recommended for human pilots. Furthermore, readily available graphics processing unit (GPU) based hardware is exploited for its parallel computing capabilities to demonstrate the practical feasibility of the proposed target detection algorithm. A prototype hardware-in-the-loop system has been found to be capable of achieving data processing rates sufficient for real-time operation, with scope for further performance improvement through code optimisation.

Overall, our proposed image-based target detection algorithm offers UAVs a cost-effective real-time target detection capability that is a step forward in addressing the collision avoidance issue, currently one of the most significant obstacles preventing widespread civilian applications of uninhabited aircraft. We also highlight that the algorithm development process has led to the discovery of a powerful multiple HMM filtering approach and a novel RER-based multiple filter design process. The utility of our multiple HMM filtering approach and RER concepts, however, extends beyond the target detection problem, as demonstrated by our application of HMM filters and RER concepts to a heading angle estimation problem.
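As an illustrative sketch of the two-stage paradigm described above, a grey-scale morphological top-hat filter (one common choice; not necessarily the thesis's candidate filters or parameters) performs the image-enhancement stage by suppressing slowly varying background while preserving small bright features; the temporal TBD stage would then accumulate evidence across the enhanced frames:

```python
# Illustrative sketch of the image-enhancement stage: a grey-scale top-hat
# filter (input minus its morphological opening) removes slowly varying
# background and keeps small bright features. Filter choice, window size
# and target amplitude are illustrative, not the thesis's values.
import numpy as np
from scipy.ndimage import white_tophat

def enhance_frame(frame: np.ndarray, window: int = 5) -> np.ndarray:
    """Suppress background structure larger than `window`; keep point-like targets."""
    return white_tophat(frame, size=(window, window))

rng = np.random.default_rng(1)
frame = rng.normal(100.0, 1.0, size=(480, 640))   # sky-like background
frame[240, 320] += 10.0                           # small bright target

enhanced = enhance_frame(frame)
print(np.unravel_index(enhanced.argmax(), enhanced.shape))  # ~(240, 320)
# A genuinely dim target would not stand out in a single frame; the TBD
# stage (e.g. the MHMM filter) accumulates evidence across many frames.
```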

Relevance:

20.00%

Publisher:

Abstract:

Aims: To develop clinical protocols for acquiring PET images, performing CT-PET registration and defining tumour volumes based on the PET image data, for radiotherapy of lung cancer patients, and then to test these protocols with respect to accuracy and reproducibility.

Method: A phantom-based quality assurance study of the processes associated with using registered CT and PET scans for tumour volume definition was conducted to: (1) investigate image acquisition and manipulation techniques for registering and contouring CT and PET images in a radiotherapy treatment planning system, and (2) determine technology-based errors in the registration and contouring processes. The outcomes of the phantom-based quality assurance study were used to determine clinical protocols. Protocols were developed for (1) acquiring patient PET image data for incorporation into the 3DCRT process, particularly ensuring that the patient is positioned in their treatment position; (2) CT-PET image registration techniques; and (3) GTV definition using the PET image data. The developed clinical protocols were tested using retrospective clinical trials to assess the inter-user variability that may be attributed to their use. A Siemens Somatom Open Sensation 20-slice CT scanner and a Philips Allegro stand-alone PET scanner were used to acquire the images for this research. The Philips Pinnacle3 treatment planning system was used to perform the image registration and contouring of the CT and PET images.

Results: Both the attenuation-corrected and transmission images obtained from standard whole-body PET staging clinical scanning protocols were acquired and imported into the treatment planning system for the phantom-based quality assurance study. Protocols for manipulating the PET images in the treatment planning system, particularly for quantifying uptake in volumes of interest and setting window levels for accurate geometric visualisation, were determined. The automatic registration algorithms were found to have sub-voxel accuracy, with transmission scan-based CT-PET registration more accurate than emission scan-based registration of the phantom images. Respiration-induced image artifacts were not found to influence registration accuracy, while inadequate pre-registration overlap of the CT and PET images was found to result in large registration errors. A threshold value based on a percentage of the maximum uptake within a volume of interest was found to accurately contour the different features of the phantom despite the lower spatial resolution of the PET images. Appropriate selection of the threshold value is dependent on target-to-background ratios and the presence of respiratory motion. The results from the phantom-based study were used to design, implement and test clinical CT-PET fusion protocols. The patient PET image acquisition protocols enabled patients to be successfully identified and positioned in their radiotherapy treatment position during the acquisition of their whole-body PET staging scan. While automatic registration techniques were found to reduce inter-user variation compared to manual techniques, there was no significant difference in the registration outcomes for transmission or emission scan-based registration of the patient images using the protocol. Tumour volumes contoured on registered patient CT-PET images, using the tested threshold values and viewing windows determined from the phantom study, demonstrated less inter-user variation for the primary tumour volume contours than those contoured using only the patient's planning CT scans.

Conclusions: The developed clinical protocols allow a patient's whole-body PET staging scan to be incorporated, manipulated and quantified in the treatment planning process to improve the accuracy of gross tumour volume localisation in 3D conformal radiotherapy for lung cancer. Image registration protocols which factor in potential software-based errors, combined with adequate user training, are recommended to increase the accuracy and reproducibility of registration outcomes. A semi-automated adaptive threshold contouring technique incorporating a PET windowing protocol accurately defines the geometric edge of a tumour volume using PET image data from a stand-alone PET scanner, including 4D target volumes.
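A minimal sketch of the percentage-of-maximum threshold contouring used in these protocols, on a hypothetical uptake volume (as noted above, the clinically appropriate threshold fraction depends on target-to-background ratio and respiratory motion):

```python
# Minimal sketch of percentage-of-maximum threshold contouring on a
# hypothetical uptake volume; the 0.40 fraction is illustrative only.
import numpy as np

def threshold_contour(pet_volume: np.ndarray, fraction: float = 0.40) -> np.ndarray:
    """Boolean mask of voxels at or above `fraction` of the maximum uptake."""
    return pet_volume >= fraction * pet_volume.max()

rng = np.random.default_rng(2)
voi = rng.gamma(2.0, 1.0, size=(32, 32, 16))   # background uptake
voi[12:18, 12:18, 6:10] += 15.0                # hypothetical avid tumour
mask = threshold_contour(voi, fraction=0.40)
print(f"contoured voxels: {int(mask.sum())}")  # mostly the 6x6x4 region
```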

Relevance:

20.00%

Publisher:

Abstract:

Advances in symptom management strategies through a better understanding of cancer symptom clusters depend on the identification of symptom clusters that are valid and reliable. The purpose of this exploratory research was to investigate alternative analytical approaches to identifying symptom clusters for patients with cancer, using readily accessible statistical methods, and to justify which methods of identification may be appropriate for this context. Three studies were undertaken: (1) a systematic review of the literature, to identify analytical methods commonly used for symptom cluster identification for cancer patients; (2) a secondary data analysis to identify symptom clusters and compare alternative methods, as a guide to best practice approaches in cross-sectional studies; and (3) a secondary data analysis to investigate the stability of symptom clusters over time.

The systematic literature review identified, in the 10 years prior to March 2007, 13 cross-sectional studies implementing multivariate methods to identify cancer-related symptom clusters. The methods commonly used to group symptoms were exploratory factor analysis, hierarchical cluster analysis and principal components analysis. Common factor analysis methods were recommended as the best practice cross-sectional methods for cancer symptom cluster identification. A comparison of alternative common factor analysis methods was conducted in a secondary analysis of a sample of 219 ambulatory cancer patients with mixed diagnoses, assessed within one month of commencing chemotherapy treatment. Principal axis factoring, unweighted least squares and image factor analysis identified five consistent symptom clusters, based on patient self-reported distress ratings for 42 physical symptoms. Extraction of an additional cluster was necessary when using alpha factor analysis to determine clinically relevant symptom clusters. The recommended approaches for symptom cluster identification using data that are not multivariate normal were: principal axis factoring or unweighted least squares for factor extraction, followed by oblique rotation; and use of the scree plot and Minimum Average Partial procedure to determine the number of factors. In contrast to other studies, which typically interpret pattern coefficients alone, in these studies symptom clusters were determined on the basis of structure coefficients. This approach was adopted for the stability of the results, as structure coefficients are correlations between factors and symptoms that are unaffected by the correlations between factors. Symptoms could be associated with multiple clusters, as a foundation for investigating potential interventions.

The stability of these five symptom clusters was investigated in separate common factor analyses, 6 and 12 months after chemotherapy commenced. Five qualitatively consistent symptom clusters were identified over time (Musculoskeletal-discomforts/lethargy, Oral-discomforts, Gastrointestinal-discomforts, Vasomotor-symptoms, Gastrointestinal-toxicities), but at 12 months two additional clusters were determined (Lethargy and Gastrointestinal/digestive symptoms). Future studies should include physical, psychological, and cognitive symptoms. Further investigation of the identified symptom clusters is required for validation, to examine causality, and potentially to suggest interventions for symptom management. Future studies should use longitudinal analyses to investigate change in symptom clusters, the influence of patient-related factors, and the impact on outcomes (e.g., daily functioning) over time.
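As a hedged sketch of the recommended pipeline above (principal-axis-style extraction followed by an oblique rotation, interpreted via structure coefficients), assuming the third-party factor_analyzer package and entirely hypothetical distress ratings:

```python
# Hedged sketch using the third-party `factor_analyzer` package; the data,
# shapes and names are hypothetical. The package exposes `structure_` for
# its oblique (promax) rotation, which stands in here for "oblique
# rotation" generally.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

# Hypothetical distress ratings: 219 patients x 42 physical symptoms.
rng = np.random.default_rng(3)
ratings = pd.DataFrame(
    rng.integers(0, 5, size=(219, 42)).astype(float),
    columns=[f"symptom_{i}" for i in range(42)],
)

# Principal-axis-style extraction with an oblique rotation.
fa = FactorAnalyzer(n_factors=5, method="principal", rotation="promax")
fa.fit(ratings)

# Structure coefficients: factor-symptom correlations unaffected by the
# correlations between factors (unlike pattern coefficients).
structure = pd.DataFrame(fa.structure_, index=ratings.columns)
print(structure.round(2).head())
```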

Relevance:

20.00%

Publisher:

Abstract:

This paper presents an overview of technical solutions for regional-area precise GNSS positioning services, such as in Queensland. The research focuses on the technical and business issues that currently constrain GPS-based local-area Real Time Kinematic (RTK) precise positioning services, and that must be addressed for such services to operate across larger regional areas in the future, thereby supporting applications in agriculture, mining, utilities, surveying, construction, and other industries. The paper first outlines an overall technical framework proposed to transition the current RTK services to future larger-scale coverage. The framework enables mixed use of different reference GNSS receiver types (dual- or triple-frequency, single or multiple systems) to provide RTK correction services to users equipped with any type of GNSS receiver. Next, data processing algorithms appropriate for triple-frequency GNSS signals are reviewed, and some key performance benefits of using triple-carrier signals for reliable RTK positioning over long distances are demonstrated. A server-based RTK software platform is being developed to allow user positioning computations at server nodes instead of on the user's device. An optimal deployment scheme for reference stations across a larger-scale network is suggested, given restrictions such as inter-station distances, candidate reference locations, and operational modes. For instance, inter-station distances between triple-frequency receivers can be extended to 150 km, double the distance between dual-frequency receivers in existing RTK network designs.
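One reason triple-frequency signals help extend usable inter-station distances is the long effective wavelengths of certain carrier-phase combinations, which make ambiguity resolution more robust over long baselines. A minimal sketch using the standard GPS L1/L2/L5 frequencies (the specific combinations shown are common illustrations, not combinations prescribed by the paper):

```python
# Minimal sketch: wavelength of a linear carrier-phase combination, the
# quantity behind (extra-)widelane ambiguity resolution. GPS frequencies
# are standard; the (i, j, k) choices below are illustrative.
C = 299_792_458.0                              # speed of light, m/s
F1, F2, F5 = 1575.42e6, 1227.60e6, 1176.45e6   # GPS L1/L2/L5, Hz

def combo_wavelength(i: int, j: int, k: int) -> float:
    """Wavelength (m) of the combination i*L1 + j*L2 + k*L5."""
    return C / (i * F1 + j * F2 + k * F5)

print(f"widelane       (1,-1, 0): {combo_wavelength(1, -1, 0):.3f} m")
print(f"extra-widelane (0, 1,-1): {combo_wavelength(0, 1, -1):.3f} m")
```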

Relevance:

20.00%

Publisher:

Abstract:

Home Automation (HA) has emerged as a prominent field for researchers and investors confronting the challenge of penetrating the average home user market with products and services emerging from technology-based vision. In spite of many technology contributions, there is a latent demand for affordable and pragmatic assistive technologies for pro-active handling of complex lifestyle-related problems faced by home users. This study pioneered the development of an Initial Technology Roadmap for HA (ITRHA) that formulates a needs-based vision of 10-15 years, identifying market, product and technology investment opportunities, focusing on those aspects of HA that contribute to efficient management of home and personal life. The concept of the Family Life Cycle is developed to understand the temporal needs of the family. In order to formally describe a coherent set of family processes, their relationships, and their interaction with external elements, a reference model named the Family System is established that identifies External Entities, 7 major Family Processes, and 7 subsystems: Finance, Meals, Health, Education, Career, Housing, and Socialisation. Analysis of these subsystems reveals Soft, Hard and Hybrid processes. Rectifying the lack of formal methods for eliciting future user requirements and reassessing evolving market needs, this study developed a novel method called Requirement Elicitation of Future Users by Systems Scenario (REFUSS), integrating process modelling and scenario techniques within the framework of roadmapping. REFUSS is used to systematically derive process automation needs, relating the process knowledge to future user characteristics identified from scenarios created to visualise different futures with richly detailed information on lifestyle trends, thus enabling learning about future requirements.

Revealing an addressable market size estimated at billions of dollars per annum, this research developed innovative ideas for software-based products, including a Document Management System facilitating automated collection and easy retrieval of all documents, an Information Management System automating information services, and a Ubiquitous Intelligent System empowering highly mobile home users with ambient intelligence. Other product ideas include versatile robotic devices, a Kitchen Hand and a Cleaner Arm, that can save time. Materialisation of these products requires technology investment, initiating further research in the areas of data extraction and information integration, as well as manipulation and perception, sensor-actuator systems, tactile sensing, odour detection, and robotic controllers. This study recommends new policies on electronic data delivery from service providers, as well as new standards for XML-based document structure and format.

Relevance:

20.00%

Publisher:

Abstract:

An Asset Management (AM) life-cycle constitutes a set of processes that align with the development, operation and maintenance of assets, in order to meet the desired requirements and objectives of the stakeholders of the business. The scope of AM is often broad within an organisation due to the interactions between its internal elements, such as human resources, finance, technology, engineering operation, information technology and management, as well as external elements such as governance and environment. Due to the complexity of AM processes, it has been proposed that, in order to optimise asset management activities, process modelling initiatives should be adopted. Although organisations adopt AM principles and carry out AM initiatives, most do not document or model their AM processes, let alone enact their processes (semi-)automatically using a computer-supported system. There is currently a lack of knowledge describing how to model AM processes in a methodical and suitable manner, so that the processes are streamlined and optimised and are ready for deployment in a computerised way. This research aims to overcome this deficiency by developing an approach that will aid organisations in constructing AM process models quickly and systematically whilst using the most appropriate techniques, such as workflow technology.

Currently, there is a wealth of information within the individual domains of AM and workflow. Both fields are gaining significant popularity in many industries, fuelling the need for research exploring the possible benefits of their cross-disciplinary application. This research therefore investigates these two domains to exploit the application of workflow to the modelling and execution of AM processes. Specifically, it investigates appropriate methodologies for applying workflow techniques to AM frameworks. One of the benefits of applying workflow models to AM processes is to adapt to and enable both ad-hoc and evolutionary changes over time. In addition, this can automate an AM process as well as support the coordination and collaboration of the people involved in carrying out the process. A workflow management system (WFMS) can be used to support the design and enactment (i.e. execution) of processes and to cope with changes that occur to the process during enactment. To date, little literature documents a systematic approach to modelling the characteristics of AM processes. In order to obtain a workflow model for AM processes, commonalities and differences between different AM processes need to be identified; this is the fundamental step in developing a conscientious workflow model for AM processes.

Therefore, the first stage of this research focuses on identifying the characteristics of AM processes, especially AM decision-making processes. The second stage is to review a number of contemporary workflow techniques and choose a suitable technique for application to AM decision-making processes. The third stage is to develop an intermediate, ameliorated AM decision process definition that improves the current process description and is ready for modelling using the workflow language selected in the previous stage. All these lead to the fourth stage, where a workflow model for an AM decision-making process is developed. The process model is then deployed (semi-)automatically in a state-of-the-art WFMS, demonstrating the benefits of applying workflow technology to the domain of AM. Given that the information in the AM decision-making process is captured at an abstract level within the scope of this work, the deployed process model can be used as an executable guideline for carrying out an AM decision process in practice. Moreover, it can be used as a vanilla system that, once enriched with information from a specific AM decision-making process (e.g. in the case of building construction or power plant maintenance), is able to support the automation of such a process in a more elaborate way.
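As a purely illustrative sketch (not the study's WFMS or its workflow language), the notion of enacting an AM decision-making process can be reduced to executing an ordered set of tasks that share a case dossier:

```python
# Toy enactment of a linear AM decision-making process; illustrative only.
# A real deployment would use a WFMS, as described in the study.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Task:
    name: str
    action: Callable[[dict], None]   # mutates the shared case dossier

@dataclass
class Workflow:
    tasks: List[Task] = field(default_factory=list)

    def enact(self, case: dict) -> dict:
        # Execute tasks in order, passing the case dossier along.
        for task in self.tasks:
            task.action(case)
            print(f"completed: {task.name}")
        return case

wf = Workflow([
    Task("assess asset condition", lambda c: c.update(condition="poor")),
    Task("evaluate options", lambda c: c.update(options=["repair", "replace"])),
    Task("select action", lambda c: c.update(decision="repair")),
])
print(wf.enact({"asset": "pump-17"}))
```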

Relevance:

20.00%

Publisher:

Abstract:

World economies increasingly demand reliable and economical power supply and distribution. To achieve this aim, the majority of power systems are becoming interconnected, with several power utilities supplying the one large network. One problem that occurs in a large interconnected power system is the regular occurrence of system disturbances, which can result in the creation of intra-area oscillating modes. These modes can be regarded as the transient responses of the power system to excitation, and are generally characterised as decaying sinusoids. For a power system operating ideally, these transient responses would have a "ring-down" time of 10-15 seconds. Sometimes equipment failures disturb the ideal operation of power systems, and oscillating modes with ring-down times greater than 15 seconds arise. The larger settling times associated with such "poorly damped" modes cause substantial power flows between generation nodes, resulting in significant physical stresses on the power distribution system. If these modes are not just poorly damped but "negatively damped", catastrophic failures of the system can occur. To ensure the stability and security of large power systems, the potentially dangerous oscillating modes generated by disturbances (such as equipment failure) must be quickly identified, so that the power utility can apply appropriate damping control strategies.

In power system monitoring there are two facets of critical interest. The first is the estimation of modal parameters for a power system in normal, stable operation. The second is the rapid detection of any substantial change to this normal, stable operation (because of equipment breakdown, for example). Most work to date has concentrated on the first of these two facets, i.e. on modal parameter estimation. Numerous modal parameter estimation techniques have been proposed and implemented, but all have limitations [1-13]. One key limitation of all existing parameter estimation methods is that they require very long data records to provide accurate parameter estimates. This is a particularly significant problem after a sudden detrimental change in damping: one simply cannot afford to wait long enough to collect the large amounts of data required by existing parameter estimators. Motivated by this gap in the current body of knowledge and practice, the research reported in this thesis focuses heavily on rapid detection of changes (i.e. on the second facet mentioned above). This thesis reports a number of new algorithms which can rapidly flag whether or not there has been a detrimental change to a stably operating system. The new algorithms enable sudden modal changes to be detected within quite short time frames (typically about 1 minute), using data from power systems in normal operation. The new methods reported in this thesis are summarised below.

The Energy Based Detector (EBD): The rationale for this method is that the modal disturbance energy is greater for lightly damped modes than for heavily damped modes (because the latter decay more rapidly). Sudden changes in modal energy therefore imply sudden changes in modal damping. Because the method relies on data from power systems in normal operation, the modal disturbances are random. Accordingly, the disturbance energy is modelled as a random process (with the parameters of the model determined from the power system under consideration), and a detection threshold is set based on this statistical model. The energy method is very simple to implement and is computationally efficient. It is, however, only able to determine whether or not a sudden modal deterioration has occurred; it cannot identify which mode has deteriorated. For this reason the method is particularly well suited to smaller interconnected power systems that involve only a single mode.

Optimal Individual Mode Detector (OIMD): As discussed above, the energy detector can only determine whether or not a change has occurred; it cannot flag which mode is responsible for the deterioration. The OIMD addresses this shortcoming. It uses optimal detection theory to test for sudden changes in individual modes. In practice, one can have an OIMD operating for every mode within a system, so that changes in any of the modes can be detected. Like the energy detector, the OIMD is based on a statistical model and a subsequently derived threshold test.

The Kalman Innovation Detector (KID): This detector is an alternative to the OIMD. Unlike the OIMD, however, it does not explicitly monitor individual modes. Rather, it relies on a key property of a Kalman filter, namely that the Kalman innovation (the difference between the estimated and observed outputs) is white as long as the Kalman filter model is valid. A Kalman filter model is set up to represent a particular power system. If some event in the power system (such as equipment failure) causes a sudden change to the power system, the Kalman model will no longer be valid and the innovation will no longer be white. Furthermore, if there is a detrimental system change, the innovation spectrum will display strong peaks at frequency locations associated with the changes. Hence the innovation spectrum can be monitored both to set off an "alarm" when a change occurs and to identify which modal frequency has given rise to the change. The threshold for alarming is based on the simple Chi-Squared PDF for a normalised white noise spectrum [14, 15]. While the method can identify the mode which has deteriorated, it does not necessarily indicate whether there has been a frequency or a damping change; the PPM discussed next can monitor frequency changes and so provides some discrimination in this regard.

The Polynomial Phase Method (PPM): In [16] the cubic phase (CP) function was introduced as a tool for revealing frequency-related spectral changes. This thesis extends the cubic phase function to a generalised class of polynomial phase functions which can reveal frequency-related spectral changes in power systems. A statistical analysis of the technique is performed. When applied to power system analysis, the PPM can provide knowledge of sudden shifts in frequency through both the new frequency estimate and the polynomial phase coefficient information. This knowledge can then be cross-referenced with other detection methods to provide improved detection benchmarks.
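A schematic sketch of the KID idea described above, using a synthetic innovation sequence rather than output from an actual Kalman filter: when the filter model is valid, the normalised innovation periodogram stays below a chi-squared threshold, and an emerging oscillatory mode shows up as a peak at its frequency:

```python
# Schematic sketch of the Kalman Innovation Detector idea, with a synthetic
# innovation sequence (a real detector would take innovations from a Kalman
# filter tracking the power system). For white innovations the normalised
# periodogram is chi-squared distributed; a peak above threshold flags a
# modal change at that frequency.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(4)
fs, n = 10.0, 600                            # 10 Hz sampling, 1 minute
innov = rng.normal(0.0, 1.0, n)              # white: healthy filter model
t = np.arange(n) / fs
innov += 0.5 * np.sin(2 * np.pi * 0.3 * t)   # emerging 0.3 Hz mode

# For white noise, 2|X_k|^2 / (n * sigma^2) ~ chi-squared with 2 dof.
spec = np.abs(np.fft.rfft(innov)) ** 2
spec_norm = 2.0 * spec / (n * innov.var())
threshold = chi2.ppf(0.999, df=2)            # 0.1% false-alarm rate per bin

freqs = np.fft.rfftfreq(n, d=1.0 / fs)
alarms = freqs[1:][spec_norm[1:] > threshold]   # skip the DC bin
print("alarm frequencies (Hz):", np.round(alarms, 2))
```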

Relevance:

20.00%

Publisher:

Abstract:

Scientific discoveries, developments in medicine, and health issues are the constant focus of media attention, and the principles surrounding the creation of so-called ‘saviour siblings’ are no exception. Developments in reproductive techniques have made it possible to genetically analyse embryos created in the laboratory, enabling parents to implant selected embryos to create a tissue-matched child who may be able to cure an existing sick child. The research undertaken in this thesis examines the regulatory frameworks overseeing the delivery of assisted reproductive technologies (ART) in Australia and the United Kingdom, and considers how those frameworks affect the accessibility of in vitro fertilisation (IVF) procedures for the creation of ‘saviour siblings’. In some jurisdictions, the accessibility of such techniques is limited by statutory requirements. The limitations and restrictions imposed by the state in relation to the technology are analysed in order to establish whether such restrictions are justified. The analysis is conducted on the basis of a harm framework, which seeks to establish whether those affected by the use of the technology (including the child who will be created) are harmed. In undertaking this evaluation, the concept of harm is considered within the scope of John Stuart Mill's liberal theory, and the Harm Principle is used as a normative tool to judge whether the level of harm that may result justifies state intervention in, or restriction of, the reproductive decision-making of parents in this context. The harm analysis conducted in this thesis seeks to determine an appropriate regulatory response to the use of pre-implantation tissue-typing for the creation of ‘saviour siblings’. The proposals outlined in the last part of the thesis seek to address the concern that harm may result from the practice of pre-implantation tissue-typing. The current regulatory frameworks are also analysed on the basis of the harm framework established in this thesis. The material referred to in this thesis reflects the law and policy in place in Australia and the UK at the time the thesis was submitted for examination (December 2009).