272 results for Apparent Return Rate
Abstract:
This paper proposes a qualitative framework for discussing the heterogeneous burning of metallic materials in terms of the parameters and factors that influence the melting rate of the solid metallic fuel (either in a standard test or in service). During burning, the melting rate is related to the burning rate and is therefore an important parameter for describing and understanding the burning process, especially since the melting rate is commonly recorded during standard flammability testing for metallic materials and is incorporated into many relative flammability ranking schemes. However, whilst the factors that influence melting rate (such as oxygen pressure or specimen diameter) have been well characterized, there is a need for an improved understanding of how these parameters interact as part of the overall melting and burning of the system. Proposed here is the Melting Rate Triangle, which aims to provide this understanding through a conceptual framework for how the melting rate (of the solid fuel) is determined and regulated during heterogeneous burning. In the paper, the proposed conceptual model is shown to be both (a) consistent with known trends and previously observed results, and (b) capable of being expanded to incorporate new data. Also shown are examples of how the Melting Rate Triangle can improve the interpretation of flammability test results. Slusser and Miller previously published an Extended Fire Triangle as a useful conceptual model of ignition and the factors affecting ignition, providing industry with a framework for discussion. In this paper it is shown that a Melting Rate Triangle provides a similar qualitative framework for burning, leading to an improved understanding of the factors affecting fire propagation and extinguishment.
Abstract:
Purpose: This two-part research project was undertaken as part of the planning process by Queensland Health (QH), Cancer Screening Services Unit (CSSU), Queensland Bowel Cancer Screening Program (QBCSP), in partnership with the National Bowel Cancer Screening Program (NBCSP), to prepare for the implementation of the NBCSP in public sector colonoscopy services in Queensland (QLD) in late 2006. There was no prior information available on the quality of colonoscopy services in QLD and no prior studies that assessed the quality of colonoscopy training in Australia. Furthermore, the NBCSP was introduced without extra funding for colonoscopy service improvement or provision for increases in colonoscopic capacity resulting from the introduction of the NBCSP. The main purpose of the research was to record baseline data on colonoscopy referral and practice in QLD and current training in colonoscopy Australia-wide. It was undertaken from a quality improvement perspective. Implementation of the NBCSP requires that all aspects of the screening pathway, in particular colonoscopy services for the assessment of positive Faecal Occult Blood Tests (FOBTs), be effective, efficient, equitable and evidence-based. This study examined two important aspects of the continuous quality improvement framework for the NBCSP as they relate to colonoscopy services: (1) evidence-based practice, and (2) quality of colonoscopy training. The Principal Investigator was employed as Senior Project Officer (Training) in the QBCSP during the conduct of this research project. Recommendations from this research have been used to inform the development and implementation of quality improvement initiatives for the provision of colonoscopy in the NBCSP, its QLD counterpart the QBCSP, and colonoscopy services in QLD in general. Methods - Part 1: Chart audit of evidence-based practice. The research was undertaken in two parts from 2005-2007. The first part comprised a retrospective chart audit of 1484 colonoscopy records (some 13% of all colonoscopies conducted in public sector facilities in 2005) in three QLD colonoscopy services. Whilst some 70% of colonoscopies are currently conducted in the private sector, only public sector colonoscopy facilities provided colonoscopies under the NBCSP. The aim of this study was to compare colonoscopy referral and practice with explicit criteria derived from the National Health & Medical Research Council (NHMRC) (1999) Clinical Practice Guidelines for the Prevention, Early Detection and Management of Colorectal Cancer, and to describe the nature of variance from the guidelines. Symptomatic presentations were the most common indication for colonoscopy (60.9%). These comprised per rectal bleeding (31.0%), change of bowel habit (22.1%), abdominal pain (19.6%), iron deficiency anaemia (16.2%), inflammatory bowel disease (8.9%) and other symptoms (11.4%). Surveillance and follow-up colonoscopies accounted for approximately one-third of the remaining colonoscopy workload across sites. Gastroenterologists (GEs) performed relatively more colonoscopies per annum (59.9%) compared to general surgeons (GS) (24.1%), colorectal surgeons (CRS) (9.4%) and general physicians (GPs) (6.5%). Guideline compliance varied with the designation of the colonoscopist: compliance was lower for CRS (62.9%) than for GPs (76.0%), GEs (75.0%) and GS (70.9%) (p<0.05).
Compliance with guideline recommendations for colonoscopic surveillance for family history of colorectal cancer (23.9%), polyps (37.0%) and a past history of bowel cancer (42.7%) was, by comparison, significantly lower than for symptomatic presentations (94.4%) (p<0.001). Variation from guideline recommendations occurred more frequently for polyp surveillance (earlier than recommended, 47.9%) and follow-up for a past history of bowel cancer (later than recommended, 61.7%; p<0.001). Bowel cancer cases detected at colonoscopy comprised 3.6% of all audited colonoscopies. Incomplete colonoscopies occurred in 4.3% of audited colonoscopies and were more common among women (76.6%). For all colonoscopies audited, the rate of incomplete colonoscopies was 1.6% for GEs (CI 0.9-2.6), 2.0% for GPs (CI 0.6-7.2), 7.0% for GS (CI 4.8-10.1) and 16.4% for CRS (CI 11.2-23.5). Of patients with a documented family history of bowel cancer, 18.6% (n=55) had colonoscopy performed against guideline recommendations (for general (category 1) population risk, for reasons of patient request or family history of polyps, rather than for high-risk status for colorectal cancer). In general, family history was inadequately documented and inadequately applied to colonoscopy referral and practice. Methods - Part 2: Surveys of quality of colonoscopy training. The second part of the research consisted of Australia-wide anonymous, self-completed surveys of colonoscopy trainers and their trainees to ascertain their opinions on the current apprenticeship model of colonoscopy training in Australia and to identify any training needs. Overall, 127 surveys were received from colonoscopy trainers (estimated response rate 30.2%). Approximately 50% of trainers agreed and 27% disagreed that current numbers of training places were adequate to maintain a skilled colonoscopy workforce in preparation for the NBCSP. Approximately 70% of trainers also supported United Kingdom (UK)-style colonoscopy training within dedicated accredited training centres using a variety of training approaches including simulation. A collaborative approach with the private sector was seen as beneficial by 65% of trainers. Non-gastroenterologists (non-GEs) were more likely than GEs to be of the opinion that simulators are beneficial for colonoscopy training (χ²-test = 5.55, P = 0.026). Approximately 60% of trainers considered that the current requirements for recognition of training in colonoscopy could be insufficient for trainees to gain competence, and 80% of those indicated that 200 colonoscopies were needed. GEs (73.4%) were more likely than non-GEs (36.2%) to be of the opinion that the Conjoint Committee standard is insufficient to gain competence in colonoscopy (χ²-test = 16.97, P = 0.0001). The majority of trainers did not support training either nurses (73%) or GPs (71%) in colonoscopy. Only 81 surveys were received from trainees (estimated response rate 17.9%): GS trainees (72.1%), GE trainees (26.3%) and GP trainees (1.2%). The majority were male (75.9%), with a median age of 32 years, and had trained in New South Wales (41.0%) or Victoria (30%). Overall, 60.8% of trainees indicated that they deemed the Conjoint Committee standard sufficient to gain competency in colonoscopy. Between specialties, 75.4% of GS trainees indicated that the Conjoint Committee standard for recognition of colonoscopy was sufficient to gain competence in colonoscopy, compared to only 38.5% of GE trainees.
Measures of competency assessed and recorded by trainees in logbooks centred mainly on caecal intubation (94.7-100%), complications (78.9-100%) and withdrawal time (51-76.2%). Trainees described limited access to colonoscopy training lists due to the time inefficiency of the apprenticeship model and the perceived monopolisation of these lists by GEs and their trainees. Improvements to the current training model suggested by trainees included: more use of simulation and training tools, a UK-style training course, concentration on quality indicators, increased access to training lists, accreditation of trainers and interdisciplinary colonoscopy training. Implications for the NBCSP/QBCSP: The introduction of the NBCSP/QBCSP necessitates higher quality colonoscopy services if it is to achieve its ultimate goal of decreasing the morbidity and mortality associated with bowel cancer in Australia. This will be achieved under a new paradigm for colonoscopy training and through implementation of evidence-based practice across the screening pathway, specifically targeting the areas highlighted in this thesis. Recommendations for improving NBCSP/QBCSP effectiveness and efficiency include the following:
1. Implementation of NBCSP and QBCSP health promotion activities that target men, in particular, to increase FOBT screening uptake.
2. Improved colonoscopy training for trainees, and refresher courses or retraining for existing proceduralists, to improve completion rates (especially for female NBCSP/QBCSP participants) and polyp and adenoma detection and removal, including newer techniques to detect flat and depressed lesions.
3. Introduction of colonoscopy training initiatives for trainees that are aligned with NBCSP/QBCSP colonoscopy quality indicators, including measurement of training outcomes using objective quality indicators such as caecal intubation, withdrawal time and adenoma detection rate.
4. Introduction of standardised, interdisciplinary colonoscopy training to reduce apparent differences between specialties with regard to compliance with guideline recommendations, completion rates and quality of polypectomy.
5. Improved quality of colonoscopy training through adoption of a UK-style training program with centres of excellence, incorporating newer, more objective assessment methods, a variety of training tools such as simulation, and rotation of trainees between metropolitan, rural, and public and private sector training facilities.
6. Incorporation of NHMRC guidelines into colonoscopy information systems to improve documentation and provide guideline recommendations at the point of care; use of gastroenterology nurse coordinators to facilitate compliance with guidelines; and provision of guideline-based colonoscopy referral letters for GPs.
7. Provision of information and education about the NBCSP/QBCSP, bowel cancer risk factors (including family history) and polyp surveillance guidelines, for participants, GPs and proceduralists.
8. Improved referral of NBCSP/QBCSP participants found to have a high-risk family history of bowel cancer to appropriate genetics services.
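The between-specialty comparisons reported in the trainer survey above are chi-squared tests on contingency tables of opinion by specialty. A minimal sketch of how such a test is computed (not the thesis's analysis code; the counts below are purely illustrative placeholders):

```python
# Chi-squared test of independence for a 2x2 table of opinion by specialty.
# Counts are hypothetical, chosen only to show the mechanics of the test.
from scipy.stats import chi2_contingency

table = [[47, 17],   # e.g. GE trainers: agree, disagree
         [21, 37]]   # e.g. non-GE trainers: agree, disagree

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-squared = {chi2:.2f}, p = {p:.4f}, dof = {dof}")
```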
Abstract:
Understanding the complexities involved in the genetics of multifactorial diseases is still a monumental task. In addition to environmental factors that can influence the risk of disease, there are a number of other complicating factors. Genetic variants associated with age of disease onset may be different from those variants associated with overall risk of disease, and variants may be located in positions that are not consistent with the traditional protein-coding genetic paradigm. Latent variable models are well suited to the analysis of genetic data. A latent variable is one that we do not directly observe, but which is believed to exist or is included for computational or analytic convenience in a model. This thesis presents a mixture of methodological developments utilising latent variables, and results from case studies in genetic epidemiology and comparative genomics. Epidemiological studies have identified a number of environmental risk factors for appendicitis, but the aetiology of disease in this organ, often thought of as a useless vestige, remains largely a mystery. The effects of smoking on other gastrointestinal disorders are well documented, and in light of this, the thesis investigates the association between smoking and appendicitis through the use of latent variables. By utilising data from a large Australian twin study questionnaire as both a cohort and a case-control study, evidence is found for an association between tobacco smoking and appendicitis. Twin and family studies have also found evidence for the role of heredity in the risk of appendicitis. Results from previous studies are extended here to estimate the heritability of age-at-onset and to account for the effect of smoking. This thesis presents a novel approach for performing a genome-wide variance components linkage analysis on transformed residuals from a Cox regression. This method finds evidence for a different subset of genes responsible for variation in age at onset than those associated with overall risk of appendicitis. Motivated by increasing evidence of functional activity in regions of the genome once thought of as evolutionary graveyards, this thesis develops a generalisation of the Bayesian multiple changepoint model for aligned DNA sequences of more than two species. This sensitive technique is applied to evaluating the distributions of evolutionary rates, with the finding that they are much more complex than previously apparent. We show strong evidence for at least 9 well-resolved evolutionary rate classes in an alignment of four Drosophila species and at least 7 classes in an alignment of four mammals, including human. A pattern of enrichment and depletion of genic regions in the profiled segments suggests they are functionally significant and most likely consist of various functional classes. Furthermore, a method of incorporating alignment characteristics representative of function, such as GC content and type of mutation, into the segmentation model is developed within this thesis. Evidence of fine-structured segmental variation is presented.
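A central methodological step summarised above is the genome-wide variance components linkage analysis performed on transformed residuals from a Cox regression of appendicitis age-at-onset. As a hedged sketch of the first stage only (fitting the Cox model and extracting residuals that could serve as a quantitative trait downstream), assuming the Python lifelines package and hypothetical column and file names:

```python
# Sketch only: Cox proportional hazards model for age at onset of appendicitis
# with smoking as a covariate, followed by extraction of martingale residuals.
# Column and file names are hypothetical; the linkage analysis itself would be
# carried out separately on these residuals.
import pandas as pd
from lifelines import CoxPHFitter

cols = ["age_at_onset", "appendicitis", "smoker", "sex"]
df = pd.read_csv("twin_cohort.csv")[cols]

cph = CoxPHFitter()
cph.fit(df, duration_col="age_at_onset", event_col="appendicitis")

residuals = cph.compute_residuals(df, kind="martingale")
residuals.to_csv("cox_residual_phenotype.csv", index=False)
```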
Abstract:
Principal Topic: In this study we investigate how strategic orientation moderates the impact of growth on profitability for a sample of Danish high-growth (Gazelle) firms. ---------- Firm growth has been an essential part of both management research and entrepreneurship research for decades (e.g. Penrose 1959, Birch 1987, Storey 1994). From a societal point of view, firm growth has been perceived as an economic generator and job creator. In entrepreneurship research, growth has been an important part of the field (Davidsson, Delmar and Wiklund 2006), and many have used growth as a measure of success. In strategic management, growth has been seen as an approach to achieving competitive advantages and a way of becoming increasingly profitable (e.g. Russo and Fouts 1997, Cho and Pucic 2005). However, although firm growth used to be perceived as a natural pathway to profitability, more skepticism has emerged recently due to both new theoretical developments and new empirical insights. Empirically, studies show inconsistent and inconclusive evidence regarding the impact of growth on profitability. Our review reveals that some studies find a substantial positive relationship, some find a weak positive relationship, some find no relationship, and some find a negative relationship. Overall, two dominant yet divergent theoretical positions can be identified. The first position, mainly focusing on environmental fit, argues that firms are likely to become more profitable if they enter a market quickly and on a larger scale, due to first-mover advantages and economies of scale. The second position, mainly focusing on internal fit, argues that growth may lead to a range of internal challenges and difficulties, including rapid change in structure, reward systems, decision making, communication and management style. The inconsistent empirical results, together with the two divergent theoretical positions, call for further investigation into the circumstances under which growth generates profitability and those under which it does not. In this project, we investigate how strategic orientations influence the impact of growth on profitability by asking the following research question: How is the impact of growth on profitability moderated by strategic orientation? Based on a literature review of how growth impacts profitability in areas such as entrepreneurship, strategic management and strategic entrepreneurship, we develop three hypotheses regarding the growth-profitability relationship and strategic orientation as a potential moderator. ---------- Methodology/Key Propositions: The three hypotheses are tested on data collected in 2008. All firms in Denmark, listed and non-listed (VAT-registered), that experienced 100% growth and had positive sales or gross profit over a four-year period (2004-2007) were surveyed. In total, 2,475 firms fulfilled the requirements. Among those, 1,107 firms returned usable questionnaires, giving a response rate of 45%. The financial data, together with data on number of employees, were obtained from D&B (previously Dun & Bradstreet). The remaining data were obtained through the survey. Hierarchical regression models with ROA (return on assets) as the dependent variable were used to test the hypotheses. In the first model, control variables including region, industry, firm age, CEO age, CEO gender, CEO education and number of employees were entered.
In the second model, growth, measured as growth in employees, was entered. Then strategic orientation (differentiation, cost leadership, focus differentiation and focus cost leadership) and then interaction effects of strategic orientation and growth were entered into the model. ---------- Results and Implications: The results show a positive impact of firm growth on profitability, and further that this impact is moderated by strategic orientation. Specifically, growth was found to have a larger impact on profitability when firms do not pursue a focus strategy (either focus differentiation or focus cost leadership). Our preliminary interpretation of the results suggests that the value of growth depends on the circumstances, and more specifically on 'how much is left to fight for'. Firms that target a narrow segment appear less likely to gain value from growth: the remaining market share these firms have to fight for is not large enough to compensate for the cost of growing. Based on our findings, growth therefore seems to have a more positive relationship with profitability for firms that address a broad market segment. Furthermore, we argue that firms pursuing a focus strategy will have more specialized assets, which decreases the possibility of further profitable expansion. For firms, CEOs, boards of directors and others, the study shows that high growth is not necessarily something worth aiming for. It is a trade-off between the cost of growing and the value of growing. For many firms, there might be better ways of generating profitability in the long run; it depends on the strategic orientation of the firm. For advisors and consultants, the conditional value of growth implies that in-depth knowledge of their clients' situation is necessary before any advice can be given. And finally, for policy makers, it means they have to be careful when initiating new policies to promote firm growth. They need to take into consideration firm strategy and industry conditions.
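As a hedged sketch of the hierarchical (moderated) regression logic described above, with controls entered first, then growth, then strategic orientation and the growth x strategy interactions (variable and file names are hypothetical, not the study's dataset):

```python
# Illustrative moderated regression: ROA on controls, growth, strategy dummies
# and growth x focus-strategy interactions. Not the study's actual code or data.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("gazelle_survey.csv")  # hypothetical merged survey/accounting data

controls = ("C(region) + C(industry) + firm_age + ceo_age + C(ceo_gender)"
            " + C(ceo_education) + employees")

m1 = smf.ols(f"roa ~ {controls}", data=df).fit()                # step 1: controls only
m2 = smf.ols(f"roa ~ {controls} + emp_growth", data=df).fit()   # step 2: add growth
m3 = smf.ols(f"roa ~ {controls} + emp_growth"                   # step 3: add strategy
             " + differentiation + cost_leadership + focus_diff + focus_cost"
             " + emp_growth:focus_diff + emp_growth:focus_cost", data=df).fit()

print(m3.summary())  # the interaction coefficients carry the moderation effect
```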
Abstract:
It is widely held that strong relationships exist between housing, economic status, and well-being. This is exemplified by widespread housing stock surpluses in many countries, which threaten to destabilise numerous aspects of individual and community life. However, the position of housing demand and supply is not consistent across countries. The Australian position provides a distinct contrast, whereby seemingly inexorable housing demand generally remains a critical issue affecting the socio-economic landscape. Underpinned by high levels of immigration, and further buoyed by sustained historically low interest rates, increasing income levels, and increased government assistance for first home buyers, this strong housing demand ensures that elements related to housing affordability continue to gain prominence. A significant but less visible factor impacting housing affordability, particularly for new housing development, relates to holding costs. These costs are in many ways hidden and cannot always be easily identified. Although holding cost is only one contributor, the nature and extent of its impact requires elucidation. In its simplest form, it commences with a calculation of the interest or opportunity cost of land holding. However, there is significantly more complexity for major new developments, particularly greenfield property development. Preliminary analysis conducted by the author suggests that even small shifts in the primary factors impacting holding costs can appreciably affect housing affordability, and notably to a greater extent than commonly held. Even so, their importance and perceived high-level impact can be gauged from the unprecedented level of attention policy makers have given them over recent years. This may be evidenced by the embedding of specific strategies to address burgeoning holding costs (and particularly those cost savings associated with streamlining regulatory assessment) within statutory instruments such as the Queensland Housing Affordability Strategy and the South East Queensland Regional Plan. However, several key issues require investigation. Firstly, the computation and methodology behind the calculation of holding costs varies widely; in fact, it is not only variable but in some instances completely ignored. Secondly, some ambiguity exists in terms of the inclusion of various elements of holding costs, thereby affecting the assessment of their relative contribution. Perhaps this may in part be explained by their nature: such costs are not always immediately apparent. Some forms of holding costs are not as visible as the more tangible cost items associated with greenfield development such as regulatory fees, government taxes, acquisition costs, selling fees, commissions and others. Holding costs are also more difficult to evaluate since, for the most part, they must ultimately be assessed over time in an ever-changing environment, based on their strong relationship with opportunity cost, which is in turn dependent, inter alia, upon prevailing inflation and/or interest rates. By extending research in the general area of housing affordability, this thesis seeks to provide a more detailed investigation of the elements related to holding costs and, in so doing, determine the size of their impact specifically on the end user. This will involve the development of soundly based economic and econometric models which seek to clarify the component impacts of holding costs.
Ultimately, there are significant policy implications in relation to the frameworks used in Australian jurisdictions to promote, retain, or otherwise maximise the opportunities for affordable housing.
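In its simplest form, the holding cost referred to above is the compounding interest or opportunity cost of capital tied up in land over the approval and development period. A minimal illustrative calculation (all figures are placeholders, not values from the thesis):

```python
# Illustrative only: opportunity cost of holding land during the approval and
# development period, and its share of a final dwelling price.
def holding_cost(land_value: float, annual_rate: float, years: float) -> float:
    """Compound interest/opportunity cost of holding land for `years` at `annual_rate`."""
    return land_value * ((1 + annual_rate) ** years - 1)

land_value = 250_000    # hypothetical land value per lot
annual_rate = 0.07      # hypothetical cost of capital
years = 2.5             # hypothetical time from acquisition to sale

cost = holding_cost(land_value, annual_rate, years)
print(f"holding cost: ${cost:,.0f}")
print(f"share of a $500,000 dwelling price: {cost / 500_000:.1%}")
```

Even this simple compounding example shows how sensitive the cost is to the holding period and the prevailing interest rate, which is the sensitivity the thesis pursues with fuller economic and econometric models.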
Abstract:
During the resorbable-polymer boom of the 1970s and 1980s, polycaprolactone (PCL) was used in the biomaterials field and in a number of drug-delivery devices. Its popularity was soon superseded by faster-resorbing polymers, which had fewer perceived disadvantages associated with long-term degradation (up to 3-4 years) and intracellular resorption pathways; consequently, PCL was almost forgotten for most of two decades. Recently, a resurgence of interest has propelled PCL back into the biomaterials arena. Its superior rheological and viscoelastic properties over many of its aliphatic polyester counterparts render PCL easy to manufacture and manipulate into a large range of implants and devices. Coupled with relatively inexpensive production routes and FDA approval, this provides a promising platform for the production of longer-term degradable implants which may be manipulated physically, chemically and biologically to possess tailorable degradation kinetics to suit a specific anatomical site. This review discusses the application of PCL as a biomaterial over the last two decades, focusing on the advantages which have propagated its return to the spotlight, with a particular focus on medical devices, drug delivery and tissue engineering.
Abstract:
Background: The preservation of meniscal tissue is important to protect joint surfaces. Purpose: We have an aggressive approach to meniscal repair, including repairing tears other than those classically suited to repair. Here we present the medium- to long-term outcome of meniscal repair (inside-out) in elite athletes. Study Design: Case series; Level of evidence, 4. Methods: Forty-two elite athletes underwent 45 meniscal repairs. All repairs were performed using an arthroscopically assisted inside-out technique. Eighty-three percent of these athletes had anterior cruciate ligament (ACL) reconstruction at the same time. Patients returned a completed questionnaire (including Lysholm and International Knee Documentation Committee [IKDC] scores). Mean follow-up was 8.5 years. Failure was defined as patients developing symptoms of joint line pain and/or locking or swelling requiring repeat arthroscopy and partial meniscectomy. Results: The average Lysholm and subjective IKDC scores were 89.6 and 85.4, respectively. Eighty-one percent of patients returned to their main sport, most to a similar level, at a mean time of 10.4 months after repair, reflecting the high proportion of ACL reconstruction in this group. We identified 11 definite failures (10 medial and 1 lateral meniscus) that required excision; this represents a 24% failure rate. We identified 1 further patient with a possible failed repair, giving a worst-case failure rate of 26.7% at a mean of 42 months after surgery. However, 7 of these failures were associated with a further injury; the atraumatic failure rate was therefore 11%. Age and the size and location of the tears were not associated with a higher failure rate. Medial meniscal repairs were significantly more likely to fail than lateral meniscal repairs, with failure rates of 36.4% and 5.6%, respectively (P < .05). Conclusion: Meniscal repair and healing are possible, and most elite athletes can return to their preinjury level of activity.
Abstract:
In 1859, Queensland was separated from New South Wales as an independent colony. At this time the new Governor conspired to ensure the citizens did not inherit the old colony's system of full male suffrage. Full suffrage was not returned until the Elections Act of 1872. However, the extended franchise was not a result of democratic values or other ideological intentions. This article analyses parliamentary debates to show that the revision to full suffrage was a result of administrative expediency, driven by an inability to prevent abuse of the limited franchise.
Abstract:
The study described in this paper developed a model of animal movement which explicitly recognised each individual as the central unit of measure. The model was developed by learning from a real dataset that measured and calculated, for individual cows in a herd, their linear and angular positions and their directional and angular speeds. Two learning algorithms were implemented: a hidden Markov model (HMM) and a long-term prediction algorithm. It is shown that an HMM can be used to describe the animal's movement and state-transition behaviour within several stay areas where cows remained for long periods. Model parameters were estimated for hidden behaviour states such as relocating, foraging and bedding. For cow movement between the stay areas, a long-term prediction algorithm was implemented. By combining these two algorithms it was possible to develop a successful model which produced results similar to the animal behaviour data collected. This modelling methodology could easily be applied to interactions of other animal species.
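As a hedged sketch of the HMM component described above (not the authors' implementation), a Gaussian HMM can be fitted to per-cow movement features such as speed and angular speed to recover hidden behaviour states such as foraging, bedding and relocating, for example with the hmmlearn package:

```python
# Sketch only: fit a 3-state Gaussian HMM to one cow's movement features
# (speed, angular speed) and decode the hidden behaviour-state sequence.
# Feature construction and the number of states are assumptions.
import numpy as np
from hmmlearn import hmm

features = np.loadtxt("cow_tracks.csv", delimiter=",")  # hypothetical [speed, angular_speed] rows

model = hmm.GaussianHMM(n_components=3, covariance_type="full", n_iter=200)
model.fit(features)

states = model.predict(features)   # most likely hidden state at each time step
print(model.transmat_)             # estimated state-transition probabilities
```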
Abstract:
This paper introduces an energy-efficient Rate Adaptive MAC (RA-MAC) protocol for long-lived Wireless Sensor Networks (WSNs). Previous research shows that the dynamic and lossy nature of wireless communication is one of the major challenges to reliable data delivery in a WSN. RA-MAC achieves high link reliability in such situations by dynamically trading off radio bit rate for signal processing gain. This extra gain reduces the packet loss rate, which results in lower energy expenditure by reducing the number of retransmissions. RA-MAC selects the optimal data rate based on channel conditions with the aim of minimizing energy consumption. We have implemented RA-MAC in TinyOS on an off-the-shelf sensor platform (TinyNode) and evaluated its performance experimentally by comparing it with a state-of-the-art WSN MAC protocol (SCP-MAC).
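RA-MAC's central idea, as summarised above, is to pick the bit rate that minimises expected energy per delivered packet, trading raw bit rate for processing gain and hence fewer retransmissions. A purely illustrative sketch of that selection logic (the energy model, loss estimates and numbers below are assumptions, not the published protocol rules):

```python
# Illustrative rate selection: choose the bit rate whose expected transmit
# energy per successfully delivered packet is lowest, given an estimated
# packet loss rate at each rate. All parameters are placeholders.
def expected_energy(bits: int, rate_bps: int, tx_power_w: float, loss: float) -> float:
    """Expected transmit energy per delivered packet, including retransmissions."""
    airtime = bits / rate_bps            # seconds on air per attempt
    attempts = 1.0 / (1.0 - loss)        # mean attempts, assuming independent losses
    return tx_power_w * airtime * attempts

def select_rate(bits: int, tx_power_w: float, loss_by_rate: dict) -> int:
    """Pick the rate minimising expected energy; loss_by_rate maps rate (bps) -> loss."""
    return min(loss_by_rate, key=lambda r: expected_energy(bits, r, tx_power_w, loss_by_rate[r]))

# Lower rates gain processing margin, so they see lower loss on a poor link.
loss_by_rate = {250_000: 0.40, 125_000: 0.15, 62_500: 0.05}
print(select_rate(bits=8 * 50, tx_power_w=0.06, loss_by_rate=loss_by_rate))
```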
Abstract:
Osteoporotic spinal fractures are a major concern in ageing Western societies. This study develops a multi-scale finite element (FE) model of the osteoporotic lumbar vertebral body to study the mechanics of vertebral compression fracture at both the apparent (whole vertebral body) and micro-structural (internal trabecular bone core) levels. Model predictions were verified against experimental data, and found to provide a reasonably good representation of the mechanics of the osteoporotic vertebral body. This novel modelling methodology will allow detailed investigation of how trabecular bone loss in osteoporosis affects vertebral stiffness and strength in the lumbar spine.
Abstract:
High-rate flooding attacks (also known as Distributed Denial of Service or DDoS attacks) continue to constitute a pernicious threat within the Internet domain. In this work we demonstrate how using packet source IP addresses, coupled with a change-point analysis of the rate of arrival of new IP addresses, may be sufficient to detect the onset of a high-rate flooding attack. Importantly, minimizing the number of features to be examined directly addresses the issue of scalability of the detection process to higher network speeds. Using a proof-of-concept implementation, we have shown how pre-onset IP addresses can be efficiently represented using a bit vector and used to modify a white-list filter in a firewall as part of the mitigation strategy.
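A hedged sketch of the detection idea described above: count previously unseen source IP addresses per time window and apply a simple one-sided CUSUM change-point test to that rate. The window handling, drift and threshold values, and the set used in place of the paper's bit-vector representation are all assumptions:

```python
# Sketch only: flag the onset of a high-rate flooding attack as a change point
# in the arrival rate of previously unseen source IPs. Parameters are illustrative.
def detect_new_ip_surge(windows, drift=5.0, threshold=50.0):
    """`windows` is an iterable of sets of source IPs seen per time window.
    Returns the index of the first window where the CUSUM statistic exceeds
    `threshold`, or None if no change point is found."""
    seen = set()      # stands in for the pre-onset address bit vector
    cusum = 0.0
    for i, ips in enumerate(windows):
        new = len(ips - seen)              # previously unseen source addresses
        seen |= ips
        cusum = max(0.0, cusum + new - drift)
        if cusum > threshold:
            return i
    return None

# Example: steady traffic from a few sources, then a flood of unseen addresses.
windows = [{f"10.0.0.{i}" for i in range(5)}] * 10 + \
          [{f"172.16.{w}.{i}" for i in range(200)} for w in range(5)]
print(detect_new_ip_surge(windows))   # reports the window where the surge begins
```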
Abstract:
Uninhabited aerial vehicles (UAVs) are a cutting-edge technology that is at the forefront of aviation/aerospace research and development worldwide. Many consider their current military and defence applications as just a token of their enormous potential. Unlocking and fully exploiting this potential will see UAVs in a multitude of civilian applications and routinely operating alongside piloted aircraft. The key to realising the full potential of UAVs lies in addressing a host of regulatory, public relations, and technological challenges never encountered before. Aircraft collision avoidance is considered to be one of the most important issues to be addressed, given its safety critical nature. The collision avoidance problem can be roughly organised into three areas: 1) Sense; 2) Detect; and 3) Avoid. Sensing is concerned with obtaining accurate and reliable information about other aircraft in the air; detection involves identifying potential collision threats based on available information; avoidance deals with the formulation and execution of appropriate manoeuvres to maintain safe separation. This thesis tackles the detection aspect of collision avoidance, via the development of a target detection algorithm that is capable of real-time operation onboard a UAV platform. One of the key challenges of the detection problem is the need to provide early warning. This translates to detecting potential threats whilst they are still far away, when their presence is likely to be obscured and hidden by noise. Another important consideration is the choice of sensors to capture target information, which has implications for the design and practical implementation of the detection algorithm. The main contributions of the thesis are: 1) the proposal of a dim target detection algorithm combining image morphology and hidden Markov model (HMM) filtering approaches; 2) the novel use of relative entropy rate (RER) concepts for HMM filter design; 3) the characterisation of algorithm detection performance based on simulated data as well as real in-flight target image data; and 4) the demonstration of the proposed algorithm's capacity for real-time target detection. We also consider the extension of HMM filtering techniques and the application of RER concepts for target heading angle estimation. In this thesis we propose a computer-vision based detection solution, due to the commercial-off-the-shelf (COTS) availability of camera hardware and the hardware's relatively low cost, power, and size requirements. The proposed target detection algorithm adopts a two-stage processing paradigm that begins with an image enhancement pre-processing stage followed by a track-before-detect (TBD) temporal processing stage that has been shown to be effective in dim target detection. We compare the performance of two candidate morphological filters for the image pre-processing stage, and propose a multiple hidden Markov model (MHMM) filter for the TBD temporal processing stage. The role of the morphological pre-processing stage is to exploit the spatial features of potential collision threats, while the MHMM filter serves to exploit the temporal characteristics or dynamics. The problem of optimising our proposed MHMM filter has been examined in detail. Our investigation has produced a novel design process for the MHMM filter that exploits information theory and entropy-related concepts. The filter design process is posed as a mini-max optimisation problem based on a joint RER cost criterion.
We provide proof that this joint RER cost criterion provides a bound on the conditional mean estimate (CME) performance of our MHMM filter, and this in turn establishes a strong theoretical basis connecting our filter design process to filter performance. Through this connection we can intelligently compare and optimise candidate filter models at the design stage, rather than having to resort to time consuming Monte Carlo simulations to gauge the relative performance of candidate designs. Moreover, the underlying entropy concepts are not constrained to any particular model type. This suggests that the RER concepts established here may be generalised to provide a useful design criterion for multiple model filtering approaches outside the class of HMM filters. In this thesis we also evaluate the performance of our proposed target detection algorithm under realistic operation conditions, and give consideration to the practical deployment of the detection algorithm onboard a UAV platform. Two fixed-wing UAVs were engaged to recreate various collision-course scenarios to capture highly realistic vision (from an onboard camera perspective) of the moments leading up to a collision. Based on this collected data, our proposed detection approach was able to detect targets out to distances ranging from about 400m to 900m. These distances (with some assumptions about closing speeds and aircraft trajectories) translate to an advanced warning ahead of impact that approaches the 12.5 second response time recommended for human pilots. Furthermore, readily available graphic processing unit (GPU) based hardware is exploited for its parallel computing capabilities to demonstrate the practical feasibility of the proposed target detection algorithm. A prototype hardware-in-the-loop system has been found to be capable of achieving data processing rates sufficient for real-time operation. There is also scope for further improvement in performance through code optimisations. Overall, our proposed image-based target detection algorithm offers UAVs a cost-effective real-time target detection capability that is a step forward in addressing the collision avoidance issue that is currently one of the most significant obstacles preventing widespread civilian applications of uninhabited aircraft. We also highlight that the algorithm development process has led to the discovery of a powerful multiple HMM filtering approach and a novel RER-based multiple filter design process. The utility of our multiple HMM filtering approach and RER concepts, however, extends beyond the target detection problem. This is demonstrated by our application of HMM filters and RER concepts to a heading angle estimation problem.
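As a hedged illustration of the image-enhancement pre-processing stage discussed above, a grey-scale close-minus-open (CMO) morphological filter is one common way to emphasise small, dim, point-like targets before temporal track-before-detect filtering; whether it matches either of the thesis's two candidate filters is not stated here, and the kernel size and use of OpenCV are assumptions:

```python
# Sketch only: close-minus-open (CMO) morphological enhancement of a grey-scale
# frame to highlight small point-like features against sky/cloud background.
import cv2
import numpy as np

frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)  # hypothetical frame
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))                # illustrative size

closed = cv2.morphologyEx(frame, cv2.MORPH_CLOSE, kernel)
opened = cv2.morphologyEx(frame, cv2.MORPH_OPEN, kernel)
cmo = closed - opened   # strong response at small bright or dark features

# Enhanced frames would then feed the temporal (e.g. HMM track-before-detect) stage.
cv2.imwrite("frame_cmo.png", np.clip(cmo, 0, 255).astype(np.uint8))
```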
Abstract:
The changing ownership of roles in organisational work-life leads this paper to examine what universities are doing in their academic development practice, through research at an Australian university where artful collaboration with the real world aims to build capability for innovative academic community engagement. The paper also presents findings on the return on expectations (Hodges, 2004) of community engagement for both academics and their organisational supervisors.