936 results for reliable narrator


Abstract:

This paper presents research being conducted by the Commonwealth Scientific and Industrial Research Organisation (CSIRO) to investigate the use of wireless sensor networks for automated livestock monitoring and control. Practical, reliable cattle monitoring is difficult to achieve with conventional technologies because of challenges such as the large grazing areas involved, the long data-sampling periods required, and constantly varying physical environments. Wireless sensor networks bring a new level of possibilities to this area, with the potential for greatly increased spatial and temporal resolution of measurement data. CSIRO has created a wireless sensor platform for animal behaviour monitoring that allows us to observe animals and collect information about them without significantly interfering with them. Based on such monitoring information, we can successfully identify each animal's behaviour and activities.
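The abstract does not specify the classification method, so the following is only a minimal sketch, assuming windowed collar-accelerometer statistics and hypothetical thresholds, of how coarse behaviour classes might be separated from the kind of monitoring data described:

```python
import numpy as np

def classify_activity(accel, fs=10, window_s=5, threshold=0.15):
    """Hypothetical threshold classifier for collar accelerometer data
    (illustrative only; the CSIRO platform's actual method is not
    described in the abstract).

    accel: (N, 3) array of acceleration samples in g; fs: sample rate (Hz).
    Returns one coarse activity label per window.
    """
    win = fs * window_s
    mag = np.linalg.norm(accel, axis=1)  # overall motion intensity per sample
    labels = []
    for start in range(0, len(mag) - win + 1, win):
        movement = mag[start:start + win].std()  # spread ~ how much the animal moves
        labels.append("active" if movement > threshold else "resting")
    return labels
```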

Abstract:

The mechanisms of helicopter flight create a unique, high-vibration environment that can play havoc with the accurate operation of on-board sensors. Isolating electronic sensors from structure-borne oscillations is paramount to their reliable and accurate use. Effective isolation is achieved by realising a trade-off between the properties of the suspended instrument package and the isolation mechanism. This becomes more difficult as the weight and size of sensors and computing hardware decrease with advances in technology. This paper presents a history of the design, challenges, constraints and construction of an integrated isolated vision and sensor platform and landing gear for the CSIRO autonomous X-Cell helicopter. Results of isolation performance tests and in-flight tests of the platform in autonomous flight are presented.
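The trade-off described here is governed by the classical single-degree-of-freedom transmissibility curve; a short sketch (textbook model, not the paper's specific isolator design) shows why a lighter instrument package forces a softer isolator:

```python
import numpy as np

def transmissibility(f, m, k, zeta):
    """Single-DOF vibration transmissibility (standard textbook formula).

    f: excitation frequency (Hz), m: suspended mass (kg),
    k: isolator stiffness (N/m), zeta: damping ratio.
    """
    fn = np.sqrt(k / m) / (2 * np.pi)   # isolator natural frequency (Hz)
    r = f / fn                          # frequency ratio
    return np.sqrt((1 + (2 * zeta * r) ** 2) /
                   ((1 - r ** 2) ** 2 + (2 * zeta * r) ** 2))

# Isolation (T < 1) requires r > sqrt(2). Reducing the suspended mass m
# raises fn, so the isolator must be made softer to keep r large: the
# trade-off the paper describes as hardware shrinks.
print(transmissibility(f=25.0, m=5.0, k=2.0e4, zeta=0.1))  # ~0.21
```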

Abstract:

In this paper, we outline the sensing system used for the visual pose control of our experimental car-like vehicle, the Autonomous Tractor. The sensing system consists of a magnetic compass, an omnidirectional camera and a low-resolution odometry system. Information from these sensors is fused using complementary filters, which provide a means of fusing information from sensors with different characteristics to produce a more reliable estimate of the desired variable. Here, the range and bearing of landmarks observed by the vision system are fused with odometry information and a vehicle model, providing more reliable estimates of these states. We also present a method of combining the compass sensor with odometry and a vehicle model to improve the heading estimate.
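As a minimal sketch of the idea (the paper's exact filter structure is not given in the abstract), a first-order complementary filter for the heading estimate might trust the compass at low frequency and integrated odometry yaw rate at high frequency:

```python
import numpy as np

def complementary_heading(compass, odom_rate, dt, tau=2.0):
    """First-order complementary filter sketch (illustrative, not the
    paper's implementation; angle wrap-around is ignored for brevity).

    compass: heading measurements (rad), odom_rate: yaw rate from
    odometry plus vehicle model (rad/s), dt: sample period (s),
    tau: crossover time constant (s) dividing the two frequency bands.
    """
    alpha = tau / (tau + dt)
    est = np.zeros(len(compass))
    est[0] = compass[0]
    for k in range(1, len(compass)):
        predicted = est[k - 1] + odom_rate[k] * dt            # high-frequency path
        est[k] = alpha * predicted + (1 - alpha) * compass[k]  # low-frequency correction
    return est
```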

Abstract:

Adolescent drinking is a significant issue, yet valid psychometric tools designed for this group are scarce. The Drinking Refusal Self-Efficacy Questionnaire—Revised Adolescent Version (DRSEQ-RA) is designed to assess an individual's belief in their ability to resist drinking alcohol. The original DRSEQ-R consists of three factors reflecting social pressure refusal self-efficacy, opportunistic refusal self-efficacy and emotional relief refusal self-efficacy. A large sample of 2020 adolescents aged between 12 and 19 years completed the DRSEQ-RA and measures of alcohol consumption in small groups. Confirmatory factor analysis confirmed the three-factor structure. All three factors were negatively correlated with both frequency and volume of alcohol consumption, and drinkers reported lower drinking refusal self-efficacy than non-drinkers. Taken together, these results suggest that the DRSEQ-RA is a reliable and valid measure of drinking refusal self-efficacy in adolescents.
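A hedged sketch of the validity check reported above, assuming a hypothetical data layout (the study's dataset and column names are not public):

```python
import pandas as pd

# Hypothetical file and column names for illustration only.
df = pd.read_csv("drseq_ra_responses.csv")
factors = ["social_pressure_rse", "opportunistic_rse", "emotional_relief_rse"]

# The abstract's validity evidence: each refusal self-efficacy factor
# should correlate negatively with drinking frequency and volume.
cols = factors + ["drink_frequency", "drink_volume"]
print(df[cols].corr().loc[factors, ["drink_frequency", "drink_volume"]])
```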

Abstract:

This study assessed the reliability and validity of a palm-top-based electronic appetite rating system (EARS) against the traditional paper-and-pen method. Twenty healthy subjects [10 male (M) and 10 female (F); mean age M = 31 years (S.D. = 8), F = 27 years (S.D. = 5); mean BMI M = 24 (S.D. = 2), F = 21 (S.D. = 5)] participated in a 4-day protocol, with measurements made on days 1 and 4. Subjects used both paper and the EARS to log hourly subjective motivation to eat during waking hours. Food intake and meal times were fixed: subjects received a maintenance diet (40% fat, 47% carbohydrate and 13% protein by energy), calculated at 1.6 × Resting Metabolic Rate (RMR) and given as three isoenergetic meals. Bland and Altman's test for bias between two measurement techniques found significant differences between the EARS and paper and pen for two of the eight responses (hunger and fullness). Regression analysis confirmed that there were no day, sex or order effects between ratings obtained with either technique. For 15 subjects there was no significant difference between the methods, with a linear relationship between them that explained most of the variance (r² ranged from 62.6 to 98.6). The slope was less than 1 for all subjects, partly explained by a tendency for bias at the extreme end of the EARS scale. These data suggest that the EARS is a useful and reliable technique for real-time data collection in appetite research, but that it should not be used interchangeably with paper-and-pen techniques.
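For reference, Bland-Altman agreement statistics of the kind used in this study can be computed as follows; this is the generic form of the test, not the study's own code:

```python
import numpy as np

def bland_altman(paper, ears):
    """Bland-Altman agreement statistics for two rating methods.

    paper, ears: paired ratings from the two techniques.
    Returns the bias (mean difference) and 95% limits of agreement.
    """
    paper, ears = np.asarray(paper, float), np.asarray(ears, float)
    diff = ears - paper
    bias = diff.mean()                 # systematic offset between methods
    loa = 1.96 * diff.std(ddof=1)      # half-width of the 95% agreement band
    return bias, (bias - loa, bias + loa)
```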

Abstract:

Uninhabited aerial vehicles (UAVs) are a cutting-edge technology at the forefront of aviation/aerospace research and development worldwide. Many consider their current military and defence applications just a token of their enormous potential; unlocking and fully exploiting this potential will see UAVs in a multitude of civilian applications, routinely operating alongside piloted aircraft. The key to realising the full potential of UAVs lies in addressing a host of regulatory, public relations, and technological challenges never encountered before. Aircraft collision avoidance, given its safety-critical nature, is considered one of the most important of these issues. The collision avoidance problem can be roughly organised into three areas: 1) sense, obtaining accurate and reliable information about other aircraft in the air; 2) detect, identifying potential collision threats based on available information; and 3) avoid, formulating and executing appropriate manoeuvres to maintain safe separation.

This thesis tackles the detection aspect of collision avoidance via the development of a target detection algorithm capable of real-time operation onboard a UAV platform. One key challenge of the detection problem is the need for early warning: potential threats must be detected while they are still far away, when their presence is likely to be obscured and hidden by noise. Another important consideration is the choice of sensors to capture target information, which has implications for the design and practical implementation of the detection algorithm. The main contributions of the thesis are: 1) a dim-target detection algorithm combining image morphology and hidden Markov model (HMM) filtering approaches; 2) the novel use of relative entropy rate (RER) concepts for HMM filter design; 3) the characterisation of algorithm detection performance on simulated data as well as real in-flight target image data; and 4) the demonstration of the proposed algorithm's capacity for real-time target detection. We also consider the extension of HMM filtering techniques and the application of RER concepts to target heading angle estimation.

We propose a computer-vision based detection solution, owing to the commercial-off-the-shelf (COTS) availability of camera hardware and its relatively low cost, power, and size requirements. The proposed target detection algorithm adopts a two-stage processing paradigm: an image enhancement pre-processing stage followed by a track-before-detect (TBD) temporal processing stage, a combination that has been shown to be effective in dim-target detection. We compare the performance of two candidate morphological filters for the pre-processing stage, and propose a multiple hidden Markov model (MHMM) filter for the TBD temporal processing stage. The morphological pre-processing stage exploits the spatial features of potential collision threats, while the MHMM filter exploits their temporal characteristics or dynamics.

The problem of optimising the proposed MHMM filter is examined in detail, producing a novel design process that exploits information theory and entropy-related concepts: the filter design is posed as a mini-max optimisation problem based on a joint RER cost criterion. We prove that this joint RER cost criterion bounds the conditional mean estimate (CME) performance of the MHMM filter, which establishes a strong theoretical connection between the filter design process and filter performance. Through this connection, candidate filter models can be compared and optimised intelligently at the design stage, rather than by resorting to time-consuming Monte Carlo simulations to gauge their relative performance. Moreover, the underlying entropy concepts are not constrained to any particular model type, suggesting that the RER concepts established here may generalise to a useful design criterion for multiple-model filtering approaches outside the class of HMM filters.

We also evaluate the performance of the proposed target detection algorithm under realistic operating conditions, and consider its practical deployment onboard a UAV platform. Two fixed-wing UAVs were engaged to recreate various collision-course scenarios, capturing highly realistic vision (from an onboard camera perspective) of the moments leading up to a collision. On this data, the proposed approach detected targets at distances ranging from about 400 m to 900 m, which (under some assumptions about closing speeds and aircraft trajectories) translates to advance warning approaching the 12.5-second response time recommended for human pilots. Furthermore, readily available graphics processing unit (GPU) hardware is exploited for its parallel computing capabilities to demonstrate the practical feasibility of the algorithm: a prototype hardware-in-the-loop system achieves data processing rates sufficient for real-time operation, with scope for further improvement through code optimisation.

Overall, the proposed image-based target detection algorithm offers UAVs a cost-effective real-time detection capability, a step forward in addressing the collision avoidance issue that currently stands among the most significant obstacles to widespread civilian applications of uninhabited aircraft. The development process also produced a powerful multiple-HMM filtering approach and a novel RER-based multiple-filter design process whose utility extends beyond the target detection problem, as demonstrated by their application to a heading angle estimation problem.
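The abstract names image morphology as the pre-processing stage but does not identify the two candidate filters; the close-minus-open (CMO) filter sketched below is one commonly used option for dim point-target enhancement and is shown only as an illustration:

```python
import numpy as np
from scipy import ndimage

def close_minus_open(frame, size=5):
    """Close-minus-open (CMO) morphological filter, a standard choice
    for enhancing dim point targets against cluttered sky (shown as one
    plausible candidate; not necessarily either filter from the thesis).

    frame: 2-D greyscale image; size: structuring element width (px).
    """
    frame = np.asarray(frame, float)
    footprint = np.ones((size, size))
    closed = ndimage.grey_closing(frame, footprint=footprint)  # fills dark gaps
    opened = ndimage.grey_opening(frame, footprint=footprint)  # removes bright specks
    # Bright features smaller than the footprint survive closing but not
    # opening, so the difference highlights target-sized spots.
    return closed - opened
```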

Abstract:

Advances in symptom management strategies through a better understanding of cancer symptom clusters depend on the identification of symptom clusters that are valid and reliable. The purpose of this exploratory research was to investigate alternative analytical approaches to identifying symptom clusters for patients with cancer, using readily accessible statistical methods, and to justify which methods of identification may be appropriate in this context. Three studies were undertaken: (1) a systematic review of the literature, to identify analytical methods commonly used for symptom cluster identification in cancer patients; (2) a secondary data analysis to identify symptom clusters and compare alternative methods, as a guide to best-practice approaches in cross-sectional studies; and (3) a secondary data analysis to investigate the stability of symptom clusters over time.

The systematic literature review identified, in the 10 years prior to March 2007, 13 cross-sectional studies implementing multivariate methods to identify cancer-related symptom clusters. The methods commonly used to group symptoms were exploratory factor analysis, hierarchical cluster analysis and principal components analysis. Common factor analysis methods were recommended as the best-practice cross-sectional methods for cancer symptom cluster identification. A comparison of alternative common factor analysis methods was conducted in a secondary analysis of a sample of 219 ambulatory cancer patients with mixed diagnoses, assessed within one month of commencing chemotherapy treatment. Principal axis factoring, unweighted least squares and image factor analysis identified five consistent symptom clusters, based on patient self-reported distress ratings of 42 physical symptoms; extraction of an additional cluster was necessary when using alpha factor analysis to determine clinically relevant symptom clusters. The recommended approaches for symptom cluster identification with data that are not multivariate normal were: principal axis factoring or unweighted least squares for factor extraction, followed by oblique rotation; and use of the scree plot and the Minimum Average Partial procedure to determine the number of factors. In contrast to other studies, which typically interpret pattern coefficients alone, in these studies symptom clusters were determined on the basis of structure coefficients. This approach was adopted for the stability of the results, as structure coefficients are correlations between factors and symptoms unaffected by the correlations between factors; symptoms could thus be associated with multiple clusters, as a foundation for investigating potential interventions.

The stability of these five symptom clusters was investigated in separate common factor analyses 6 and 12 months after chemotherapy commenced. Five qualitatively consistent symptom clusters were identified over time (Musculoskeletal-discomforts/lethargy, Oral-discomforts, Gastrointestinal-discomforts, Vasomotor-symptoms, Gastrointestinal-toxicities), but at 12 months two additional clusters were determined (Lethargy and Gastrointestinal/digestive symptoms). Future studies should include physical, psychological, and cognitive symptoms. Further investigation of the identified symptom clusters is required for validation, to examine causality, and potentially to suggest interventions for symptom management. Future studies should also use longitudinal analyses to investigate change in symptom clusters, the influence of patient-related factors, and the impact on outcomes (e.g., daily functioning) over time.
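A minimal sketch of the recommended extraction/rotation pipeline using the `factor_analyzer` package, with a hypothetical data file; computing structure coefficients as the pattern loadings times the factor correlation matrix follows the approach described above:

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer

# Hypothetical file/column layout: 42 symptom distress ratings per patient.
ratings = pd.read_csv("symptom_distress.csv")

# Principal axis factoring with oblique (oblimin) rotation, as recommended.
fa = FactorAnalyzer(n_factors=5, method="principal", rotation="oblimin")
fa.fit(ratings)

# Structure coefficients = pattern matrix x factor correlation matrix (phi);
# these are the factor-symptom correlations the thesis interprets, rather
# than the pattern coefficients alone.
structure = fa.loadings_ @ fa.phi_
print(pd.DataFrame(structure, index=ratings.columns))
```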

Abstract:

The reliability of critical infrastructure is considered a fundamental expectation of modern societies. These large-scale socio-technical systems have always, due to their complex nature, faced threats challenging their ongoing functioning. However, increasing uncertainty, in addition to the trend of infrastructure fragmentation, has made reliable service provision not only a key organisational goal but a major continuity challenge, especially given the highly interdependent network conditions that exist both regionally and globally. The notion of resilience as an adaptive capacity supporting infrastructure reliability under conditions of uncertainty and change has emerged as a critical capacity for systems of infrastructure and the organisations responsible for their reliable management. This study explores infrastructure reliability through the lens of resilience, from an organisation and system perspective, using two recognised resilience-enhancing management practices, High Reliability Theory (HRT) and Business Continuity Management (BCM), to better understand how this phenomenon manifests within a partially fragmented (corporatised) critical infrastructure industry: the Queensland electricity industry. The methodological approach involved a single case study design (the industry) with embedded sub-units of analysis (organisations), utilising in-depth interviews and document analysis to elicit findings. Derived from detailed assessment of BCM and reliability-enhancing characteristics, findings suggest that the industry as a whole exhibits resilient functioning, although this manifests at different levels across the industry and in different combinations. While there were distinct differences in resilient capabilities at the organisational level, differences were less marked at the system (industry) level, with many common understandings carried over from the pre-corporatised operating environment. These heritage factors were central to understanding the system-level cohesion noted in the work. The findings are intended to contribute to a body of knowledge encompassing resilience and high reliability in critical infrastructure industries. The research also has practical value, as it suggests a range of opportunities to enhance resilient functioning under increasingly interdependent, networked conditions.

Abstract:

This paper presents an overview of technical solutions for regional-area precise GNSS positioning services, such as in Queensland. The research focuses on the technical and business issues that currently constrain GPS-based local-area Real Time Kinematic (RTK) precise positioning services from operating across larger regional areas, and therefore from supporting services in agriculture, mining, utilities, surveying, construction, and other industries. The paper first outlines an overall technical framework proposed to transition current RTK services to future larger-scale coverage. The framework enables mixed use of different reference GNSS receiver types, dual- or triple-frequency, single or multiple systems, to provide RTK correction services to users equipped with any type of GNSS receiver. Next, data processing algorithms appropriate for triple-frequency GNSS signals are reviewed, and some key performance benefits of using triple-carrier signals for reliable RTK positioning over long distances are demonstrated. A server-based RTK software platform is being developed to allow user positioning computations at server nodes instead of on the user's device. An optimal deployment scheme for reference stations across a larger-scale network is suggested, given restrictions such as inter-station distances, candidate reference locations, and operational modes. For instance, inter-station distances between triple-frequency receivers can be extended to 150 km, double the distance of dual-frequency receivers in existing RTK network designs.
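An illustrative calculation (not from the paper) of why triple-frequency signals help over long baselines: differencing carriers creates long "virtual" wavelengths whose integer ambiguities are far easier to resolve as inter-station distances grow.

```python
# GPS L1/L2/L5 shown as an example triple-frequency constellation.
C = 299_792_458.0                                # speed of light (m/s)
f1, f2, f5 = 1575.42e6, 1227.60e6, 1176.45e6     # carrier frequencies (Hz)

print(f"L1 wavelength:          {C / f1:.3f} m")        # ~0.190 m
print(f"wide lane (L1 - L2):    {C / (f1 - f2):.3f} m")  # ~0.862 m
print(f"extra-wide (L2 - L5):   {C / (f2 - f5):.3f} m")  # ~5.861 m
```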

Abstract:

World economies increasingly demand reliable and economical power supply and distribution. To achieve this, the majority of power systems are becoming interconnected, with several power utilities supplying one large network. One problem that occurs in a large interconnected power system is the regular occurrence of system disturbances, which can result in the creation of intra-area oscillating modes. These modes can be regarded as the transient responses of the power system to excitation, generally characterised as decaying sinusoids. For a power system operating ideally, these transient responses would have a "ring-down" time of 10-15 seconds. Sometimes equipment failures disturb the ideal operation of the power system, and oscillating modes with ring-down times greater than 15 seconds arise. The larger settling times associated with such "poorly damped" modes cause substantial power flows between generation nodes, placing significant physical stress on the power distribution system. If these modes are not just poorly damped but "negatively damped", catastrophic failures of the system can occur. To ensure the stability and security of large power systems, potentially dangerous oscillating modes generated by disturbances (such as equipment failure) must be identified quickly, so that the power utility can apply appropriate damping control strategies.

Power system monitoring has two facets of critical interest. The first is the estimation of modal parameters for a power system in normal, stable operation. The second is the rapid detection of any substantial change to this normal, stable operation (because of equipment breakdown, for example). Most work to date has concentrated on the first facet, modal parameter estimation. Numerous estimation techniques have been proposed and implemented, but all have limitations [1-13]. A key limitation of all existing methods is that they require very long data records to provide accurate parameter estimates; this is particularly significant after a sudden detrimental change in damping, when one simply cannot afford to wait long enough to collect the large amounts of data required. Motivated by this gap in the current body of knowledge and practice, the research reported in this thesis focuses on rapid detection of changes (the second facet). The thesis reports a number of new algorithms that can rapidly flag whether there has been a detrimental change to a stable operating system, enabling sudden modal changes to be detected within quite short time frames (typically about 1 minute) using data from power systems in normal operation. The new methods are summarised below.

The Energy Based Detector (EBD): The rationale for this method is that the modal disturbance energy is greater for lightly damped modes than for heavily damped modes (because the latter decay more rapidly); sudden changes in modal energy therefore imply sudden changes in modal damping. Because the method relies on data from power systems in normal operation, the modal disturbances are random, and the disturbance energy is modelled as a random process (with the model parameters determined from the power system under consideration). A threshold is then set based on this statistical model. The energy method is very simple to implement and computationally efficient. It can, however, only determine whether a sudden modal deterioration has occurred; it cannot identify which mode has deteriorated. For this reason the method is particularly well suited to smaller interconnected power systems that involve only a single mode.

Optimal Individual Mode Detector (OIMD): As discussed above, the energy detector cannot flag which mode is responsible for a deterioration. The OIMD addresses this shortcoming by using optimal detection theory to test for sudden changes in individual modes. In practice, one OIMD can operate for each mode within a system, so that changes in any of the modes can be detected. Like the energy detector, the OIMD is based on a statistical model and a subsequently derived threshold test.

The Kalman Innovation Detector (KID): This detector is an alternative to the OIMD, but it does not explicitly monitor individual modes. Rather, it relies on a key property of the Kalman filter: the Kalman innovation (the difference between the estimated and observed outputs) is white as long as the filter model is valid. A Kalman filter model is set to represent a particular power system; if some event in the power system (such as equipment failure) causes a sudden change, the Kalman model is no longer valid and the innovation is no longer white. Furthermore, after a detrimental system change, the innovation spectrum displays strong peaks at the frequency locations associated with the change. The innovation spectrum can therefore be monitored both to raise an "alarm" when a change occurs and to identify which modal frequency has given rise to it. The alarm threshold is based on the simple chi-squared PDF of a normalised white-noise spectrum [14, 15]. While the method can identify the mode that has deteriorated, it does not necessarily indicate whether a frequency or damping change has occurred; the PPM, discussed next, can monitor frequency changes and so provides some discrimination in this regard.

The Polynomial Phase Method (PPM): In [16] the cubic phase (CP) function was introduced as a tool for revealing frequency-related spectral changes. This thesis extends the cubic phase function to a generalised class of polynomial phase functions that can reveal frequency-related spectral changes in power systems, and a statistical analysis of the technique is performed. Applied to power system analysis, the PPM provides knowledge of sudden shifts in frequency through both the new frequency estimate and the polynomial phase coefficient information. This knowledge can then be cross-referenced with other detection methods to provide improved detection benchmarks.
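A sketch of the KID alarm logic, built only on the whiteness property stated above; the false-alarm probability and all code details are illustrative, not the thesis's implementation:

```python
import numpy as np
from scipy.stats import chi2

def kid_alarm(innovation, p_false_alarm=1e-3):
    """Flag spectral peaks in a Kalman innovation sequence.

    If the filter model is valid, the innovation is white and each bin of
    its normalised periodogram is ~ chi2(2)/2 distributed; bins exceeding
    a chi-squared threshold indicate a modal change at that frequency.
    """
    x = np.asarray(innovation, float)
    # Normalised periodogram: ~1 on average for white noise.
    spec = np.abs(np.fft.rfft(x)) ** 2 / (len(x) * x.var())
    threshold = chi2.ppf(1 - p_false_alarm, df=2) / 2
    alarm_bins = np.flatnonzero(spec[1:] > threshold) + 1   # skip the DC bin
    return alarm_bins, spec
```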

Abstract:

BACKGROUND: Support and education for parents faced with managing a child with atopic dermatitis are crucial to the success of current treatments. Interventions aiming to improve parents' management of this condition are promising. Unfortunately, evaluation is hampered by the lack of precise research tools to measure change. OBJECTIVES: To develop a suite of valid and reliable research instruments to appraise parents' self-efficacy for performing atopic dermatitis management tasks, their outcome expectations of performing those tasks, and their self-reported task performance, in a community sample of parents of children with atopic dermatitis. METHODS: The Parents' Eczema Management Scale (PEMS) and the Parents' Outcome Expectations of Eczema Management Scale (POEEMS) were developed from an existing self-efficacy scale, the Parental Self-Efficacy with Eczema Care Index (PASECI). Each scale was presented in a single self-administered questionnaire measuring self-efficacy, outcome expectations, and self-reported task performance related to managing childhood atopic dermatitis. Each was tested with a community sample of parents of children with atopic dermatitis, and psychometric evaluation of the scales' reliability and validity was conducted. SETTING AND PARTICIPANTS: A community-based convenience sample of 120 parents of children with atopic dermatitis completed the self-administered questionnaire; participants were recruited through schools across Australia. RESULTS: Satisfactory internal consistency and test-retest reliability were demonstrated for all three scales. Construct validity was satisfactory, with positive relationships between self-efficacy for managing atopic dermatitis and general perceived self-efficacy; between self-efficacy and self-reported task performance; and between self-efficacy and outcome expectations. Factor analyses revealed two-factor structures for PEMS and PASECI alike, with both scales containing factors related to performing routine management tasks and to managing the child's symptoms and behaviour. Factor analysis applied to POEEMS resulted in a three-factor structure, with factors relating to independent management of atopic dermatitis by the parent, involving healthcare professionals in management, and involving the child in the management of atopic dermatitis. Parents' self-efficacy and outcome expectations had a significant influence on self-reported task performance. CONCLUSIONS: Findings suggest that PEMS and POEEMS are valid and reliable instruments worthy of further psychometric evaluation; likewise, the validity and reliability of PASECI were confirmed.
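The internal consistency reported above is conventionally measured with Cronbach's alpha; the generic formula is easy to state in code (a sketch, not the study's analysis script):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency of a scale.

    items: (n_respondents, n_items) array of item scores.
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    """
    items = np.asarray(items, float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)
```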

Abstract:

The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce inter-pixel redundancies and to obtain a parsimonious representation of these images. A specific fixed decomposition structure is designed for the wavelet packet transform in order to save on computation, transmission, and storage costs. This decomposition structure is based on analysis of the information packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics; the decision is based on the effect of each subband on the reconstructed image according to the mean square criterion as well as human visual sensitivities.

To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed, based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model's shape parameter is formulated to estimate the model parameters. A noise-shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used; in this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters, known as the truncation level and scaling factor. In lattice-based compression algorithms reported in the literature, the lattice structure is commonly predetermined, leading to a non-optimized quantization approach. In this research, a new technique for determining the lattice parameters is proposed: no assumption about the lattice parameters is made, and no training or multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage.

Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. To take advantage of the properties of LVQ and its fast implementation, while accommodating the i.i.d. non-uniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all source vectors without needing to project them onto the lattice's outermost shell, while properly maintaining a small codebook size. It also resolves the wedge-region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training or multi-quantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images; for coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce the bit requirements of the necessary side information is also introduced, and lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high-quality reconstructed images with better compression ratios than other available algorithms; objective and subjective performance comparisons with other available techniques are presented.

The quality of the reconstructed images is important for reliable identification, so enhancement and feature extraction on the reconstructed images are also investigated in this research. A structural feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. Ridges are extracted from enhanced grey-level foreground areas based on the local dominant ridge directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating-average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
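The thesis fits the generalized Gaussian shape parameter with its own least-squares formulation; as a simpler stand-in, the standard moment-matching estimator below illustrates how a subband's shape parameter can be recovered from its coefficients:

```python
import numpy as np
from scipy.special import gamma
from scipy.optimize import brentq

def ggd_shape(coeffs):
    """Generalized Gaussian shape parameter by moment matching
    (a standard estimator shown for illustration; not the thesis's
    least-squares fit).

    Matches the sample ratio E|x| / sqrt(E[x^2]) to its closed form
    under the GGD: Gamma(2/b) / sqrt(Gamma(1/b) * Gamma(3/b)).
    """
    x = np.asarray(coeffs, float)
    ratio = np.abs(x).mean() / np.sqrt((x ** 2).mean())

    def moment_ratio(beta):
        return gamma(2 / beta) / np.sqrt(gamma(1 / beta) * gamma(3 / beta))

    # moment_ratio is monotonic in beta, so a root bracket suffices.
    return brentq(lambda b: moment_ratio(b) - ratio, 0.05, 10.0)

# beta ~ 2 recovers the Gaussian; wavelet detail subbands are typically
# much peakier (beta well below 2), which drives the quantizer design.
print(ggd_shape(np.random.standard_normal(100_000)))  # should be near 2
```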

Abstract:

This thesis presents a study of the mechanical properties of thin films, with the main aim of determining the properties of sol-gel derived coatings. These films are used in a range of different applications and are known to be quite porous. Very little work has been carried out in this area, so to validate the techniques developed here, some of the work was carried out on magnetron-sputtered metal coatings. The main part of the work concentrates on the development of various bending techniques to measure the elastic modulus of thin films, including small-scale three-point bending as well as a novel bi-axial bending technique based on a disk resting on three supporting balls. The bending techniques involve applying a load to the sample under test and recording the bending response. These experiments were carried out using an ultra-micro-indentation system with very sensitive force and depth recording capabilities; by analysing the resulting forces and deflections using existing theories of elasticity, the elastic modulus may be determined. In addition to the bi-axial bending study, a finite element analysis of the stress distribution in a disk during bending was carried out. The bi-axial bending results for the magnetron-sputtered films were confirmed by ultra-micro-indentation tests, which give the hardness and elastic modulus of the films. It was found that while the three-point bending method gave acceptable results for uncoated steel substrates, it was very susceptible to slight deformations of the substrate. Improvements were made by more careful preparation of the substrates to avoid deformation; however, the technique still failed to give reasonable results for coated specimens. In contrast, bi-axial bending gave very reliable results even for very thin films, and this technique was also found to be useful for determining the properties of sol-gel coatings. In addition, an ultra-micro-indentation study of the hardness and elastic modulus of sol-gel films was conducted, covering conventionally fired films as well as films ion-implanted at a range of doses. The indentation tests showed that for implantation of H+ ions at doses exceeding 3×10¹⁶ ions/cm², the mechanical properties closely resembled those of films conventionally fired at 450°C.
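The principle behind the bending measurements is standard beam theory; for the uncoated substrate case, a centre-loaded three-point bend gives the modulus directly (a sketch of the underlying textbook formula, not the thesis's coated-film analysis, which must also account for the film/substrate composite):

```python
def three_point_modulus(force, deflection, span, width, thickness):
    """Elastic modulus from a centre-loaded three-point bend test.

    Standard beam theory: E = F * L^3 / (48 * I * d), with I the second
    moment of area of a rectangular cross-section.
    force (N), deflection (m), span/width/thickness (m).
    """
    inertia = width * thickness ** 3 / 12          # rectangular section
    return force * span ** 3 / (48 * inertia * deflection)

# Example: 0.1 N deflecting a 10 mm span steel strip (5 mm x 0.2 mm)
# by 3.1 um gives roughly 200 GPa, as expected for steel.
print(three_point_modulus(0.1, 3.1e-6, 10e-3, 5e-3, 0.2e-3) / 1e9, "GPa")
```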

Abstract:

Dasheen mosaic potyvirus (DsMV) is an important virus affecting taro. The virus has been found wherever taro is grown and infects both the edible and ornamental aroids, causing yield losses of up to 60%. The presence of DsMV, and other viruses, prevents the international movement of taro germplasm between countries. This has a significant negative impact on taro production in many countries, owing to the inability to access improved taro lines produced in breeding programs. To overcome this problem, sensitive and reliable virus diagnostic tests need to be developed to enable the indexing of taro germplasm. The aim of this study was to generate an antiserum against a recombinant DsMV coat protein (CP) and to develop a serological diagnostic test able to detect Pacific Island isolates of the virus.

The CP-coding regions of 16 DsMV isolates from Papua New Guinea, Samoa, Solomon Islands, French Polynesia, New Caledonia and Vietnam were amplified, cloned and sequenced. The size of the CP-coding region ranged from 939 to 1038 nucleotides, encoding putative proteins of 313 to 346 amino acids with molecular masses of 34 to 38 kDa. Analysis of the amino acid sequences revealed several motifs typically found in potyviruses, including DAG, WCIE/DN, RQ and AFDF. When the amino acid sequences were compared with each other and with the DsMV sequences in the database, the maximum variability was 21.9%; when only the core region of the CP was analysed, the maximum variability dropped to 6%, indicating that most variability lies in the N terminus. Within the seven PNG isolates of DsMV, the maximum variability was 16.9% over the entire CP-coding region and 3.9% over the core region. The sequence of PNG isolate P1 was the most similar to all other sequences. Phylogenetic analysis indicated that almost all isolates grouped according to their provenance; further, the seven PNG isolates grouped according to the region within PNG from which they were obtained.

Because of the extensive variability over the entire CP-coding region, the core region of the CP of PNG isolate P1 was cloned into a protein expression vector and expressed as a recombinant protein. The protein was purified by chromatography and SDS-PAGE and used as an antigen to generate antiserum in a rabbit. In western blots, the antiserum reacted with bands of approximately 45-47 kDa in extracts from purified DsMV and from known DsMV-infected plants from PNG; no bands were observed with healthy plant extracts. The antiserum was subsequently incorporated into an indirect ELISA. This procedure was found to be very sensitive, detecting DsMV in sap diluted at least 1:1,000. In both western blot and ELISA formats, the antiserum detected a wide range of DsMV isolates, including those from Australia, New Zealand, Fiji, French Polynesia, New Caledonia, Papua New Guinea, Samoa, Solomon Islands and Vanuatu; these plants were verified to be infected with DsMV by RT-PCR. In specificity tests, the antiserum also reacted with sap from plants infected with SCMV, PRSV-P and PRSV-W, but not with PVY- or CMV-infected plants.
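The variability percentages quoted above are pairwise differences between aligned sequences; a trivial sketch (with made-up aligned sequences; the study's alignments are not shown) illustrates the computation:

```python
def percent_variability(seq_a, seq_b):
    """Pairwise percent difference between two aligned amino-acid
    sequences (hypothetical example data, for illustration only).
    """
    assert len(seq_a) == len(seq_b), "sequences must be aligned"
    mismatches = sum(a != b for a, b in zip(seq_a, seq_b))
    return 100.0 * mismatches / len(seq_a)

# Two mismatches over ten aligned residues -> 20.0% variability.
print(percent_variability("DAGKQWCIEN", "DAGRQWCIDN"))
```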