Abstract:
An ironless motor for use as a direct wheel drive is presented. The motor is intended for use in a lightweight (600 kg), low-drag, series-hybrid commuter vehicle under development at The University of Queensland. The vehicle will use these ironless motors in each of its rear wheels, with each motor producing a peak torque output of 500 Nm and a maximum rotational speed of 1500 rpm. The axial-flux motor consists of twin ironless litz-wire stators with a central magnetic ring and simplified Halbach magnet arrays on either side. A small amount of iron is used to support the outer Halbach arrays and to improve the peak magnetic flux density. Ducted air cooling is used to remove heat from the motor and allows for a continuous torque rating of 250 Nm. Ironless machines have previously been shown to be effective in high-speed, high-frequency applications (above 1000 Hz). They are generally regarded as non-optimal for low-speed applications, as iron cores allow for better magnet utilisation and do not significantly increase the weight of a machine. However, ironless machines can also be effective in applications where the average torque requirement is much lower than the peak torque requirement, such as in some vehicle drive applications. The low spinning losses in ironless machines are shown to result in very high energy-throughput efficiency over a wide range of vehicle driving cycles.
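As a quick cross-check of the quoted ratings, shaft power follows from P = τω. The sketch below (a hypothetical helper, not from the paper) gives an upper bound on per-motor power, assuming the stated torque figures were available at the maximum speed of 1500 rpm:

```python
import math

def mechanical_power_kw(torque_nm: float, rpm: float) -> float:
    """Shaft power P = torque * angular speed (rad/s), converted to kW."""
    return torque_nm * rpm * 2 * math.pi / 60 / 1000

peak = mechanical_power_kw(500, 1500)        # from the peak torque rating
continuous = mechanical_power_kw(250, 1500)  # from the continuous (air-cooled) rating
```

This puts the peak at roughly 78.5 kW per motor and the continuous rating at roughly 39.3 kW per motor.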
Application of near infrared (NIR) spectroscopy for determining the thickness of articular cartilage
Abstract:
The determination of the characteristics of articular cartilage such as thickness, stiffness and swelling, especially in a form that can facilitate real-time decisions and diagnostics, is still a matter for research and development. This paper correlates near infrared spectroscopy with mechanically measured cartilage thickness to establish a fast, non-destructive, repeatable and precise protocol for determining this tissue property. Statistical correlation was conducted between the thickness of bovine cartilage specimens (n = 97) and regions of their near infrared spectra. Nine regions were established along the full absorption spectrum of each sample and were correlated with the thickness using partial least squares (PLS) regression multivariate analysis. The coefficient of determination (R2) varied between 53% and 93%, with the most predictive region for cartilage thickness (R2 = 93.1%, p < 0.0001) lying in the 5350–8850 cm−1 wavenumber region. Our results demonstrate that the thickness of articular cartilage can be measured spectroscopically using NIR light. This protocol is potentially beneficial to clinical practice and surgical procedures in the treatment of joint diseases such as osteoarthritis.
Abstract:
Collecting regular personal reflections from first year teachers in rural and remote schools is challenging as they are busily absorbed in their practice, and separated from each other and the researchers by thousands of kilometres. In response, an innovative web-based solution was designed to both collect data and be a responsive support system for early career teachers as they came to terms with their new professional identities within rural and remote school settings. Using an emailed link to a web-based application named goingok.com, the participants are charting their first year plotlines using a sliding scale from ‘distressed’, ‘ok’ to ‘soaring’ and describing their self-assessment in short descriptive posts. These reflections are visible to the participants as a developing online journal, while the collections of de-identified developing plotlines are visible to the research team, alongside numerical data. This paper explores important aspects of the design process, together with the challenges and opportunities encountered in its implementation. A number of the key considerations for choosing to develop a web application for data collection are initially identified, and the resultant application features and scope are then examined. Examples are then provided about how a responsive software development approach can be part of a supportive feedback loop for participants while being an effective data collection process. Opportunities for further development are also suggested with projected implications for future research.
Abstract:
The continuous growth of XML data poses a great concern in the area of XML data management. The need for processing large amounts of XML data brings complications to many applications, such as information retrieval, data integration and many others. One way of simplifying this problem is to break the massive amount of data into smaller groups by applying clustering techniques. However, XML clustering is an intricate task that may involve processing both the structure and the content of XML data in order to identify similar XML data. This research presents four clustering methods: two utilizing the structure of XML documents and two utilizing both the structure and the content. The two structural clustering methods have different data models: one is based on a path model and the other on a tree model. These methods employ rigid similarity measures which aim to identify corresponding elements between documents with different or similar underlying structures. The two clustering methods that utilize both structural and content information vary in how the structure and content similarities are combined. One clustering method calculates the document similarity using a linear weighting combination of structure and content similarities, with the content similarity based on a semantic kernel. The other method calculates the distance between documents by a non-linear combination of the structure and content of XML documents using a semantic kernel. Empirical analysis shows that the structure-only clustering method based on the tree model is more scalable than the structure-only clustering method based on the path model, as the tree similarity measure does not need to visit the parents of an element many times. Experimental results also show that the clustering methods perform better with the inclusion of content information on most test document collections.
To further the research, the structural clustering method based on the tree model is extended and employed in XML transformation. The results from the experiments show that the proposed transformation process is faster than the traditional transformation system, which translates and converts the source XML documents sequentially. Also, the schema-matching process of XML transformation produces a better matching result in a shorter time.
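The linear weighting combination of structure and content similarities described above can be illustrated as follows; the equal default weighting is an assumption for illustration, not a value from the thesis:

```python
def combined_similarity(struct_sim: float, content_sim: float,
                        weight: float = 0.5) -> float:
    """Linear weighting combination of structure and content similarity.

    `weight` trades off structural similarity against (semantic-kernel)
    content similarity; the default of 0.5 is illustrative only.
    """
    return weight * struct_sim + (1 - weight) * content_sim

# Two documents with similar structure but divergent content:
sim = combined_similarity(0.8, 0.4)
```

The non-linear variant in the thesis replaces this weighted sum with a kernel over both components; the linear form is simpler to tune but cannot capture interactions between structure and content.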
Abstract:
Purpose Videokeratoscopy images can be used for the non-invasive assessment of the tear film. In this work, the applicability of an image-processing technique, textural analysis, for the assessment of the tear film in Placido disc images has been investigated. Methods In the presence of tear film thinning/break-up, the reflected pattern from the videokeratoscope is disturbed in the region of tear film disruption. Thus, the Placido pattern carries information about the stability of the underlying tear film, and by characterizing the pattern regularity, the tear film quality can be inferred. In this paper, a textural-features approach is used to process the Placido images. This method provides a set of texture features from which an estimate of the tear film quality can be obtained. The method is tested for the detection of dry eye in a retrospective dataset from 34 subjects (22 normal and 12 dry eye), with measurements taken under suppressed blinking conditions. Results To assess the capability of each texture feature to discriminate dry-eye from normal subjects, the receiver operating characteristic (ROC) curve was calculated and the area under the curve (AUC), specificity and sensitivity extracted. For the different features examined, the AUC ranged from 0.77 to 0.82, while the sensitivity typically showed values above 0.9 and the specificity values around 0.6. Overall, the estimated ROCs indicate that the proposed technique provides good discrimination performance. Conclusions Texture analysis of videokeratoscopy images is applicable to the study of tear film anomalies in dry-eye subjects, and the proposed technique appears to have clinical relevance and utility.
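The ROC/AUC evaluation can be sketched without any imaging code: given one texture-feature score per subject, the AUC equals the Mann-Whitney probability that a randomly chosen dry-eye score exceeds a randomly chosen normal score. The scores below are synthetic stand-ins matching the cohort sizes (22 normal, 12 dry eye), not the study's data:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical texture-feature scores: higher = more irregular Placido rings.
normal = rng.normal(loc=0.4, scale=0.1, size=22)  # 22 normal subjects
dry = rng.normal(loc=0.7, scale=0.1, size=12)     # 12 dry-eye subjects

# AUC as the Mann-Whitney U statistic scaled to [0, 1]:
# the fraction of (dry, normal) pairs where the dry-eye score is higher.
auc = np.mean(dry[:, None] > normal[None, :])
```

Sensitivity and specificity then follow from choosing an operating threshold on the same scores, which is how the reported 0.9 / 0.6 trade-off arises.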
Abstract:
Objective Surveillance programs and research for acute respiratory infections in remote Aboriginal communities are complicated by difficulties in the storage and transport of frozen samples to urban laboratories for testing. This study assessed the sensitivity of a simple method for transporting respiratory samples from a remote setting for viral PCR, compared with frozen specimens. Methods We sampled every individual who presented to a remote Aboriginal community clinic in a non-epidemic respiratory season. Two anterior nasal swabs were collected from each participant. The left nare specimen was mailed to the laboratory via routine postal services; the right nare specimen was transported frozen. Testing for 16 viruses was undertaken using real-time multiplex PCR. Results A total of 140 participants were enrolled, contributing 150 study visits. Respiratory illnesses accounted for 10% of the reasons for presentation. Sixty-one viruses were identified in 50 (33.3%) presentations from 40 (28.6%) individuals; bocavirus and rhinovirus were the most common viruses identified (14.0% and 12.6% of episodes respectively). The sensitivity for any virus detected in mailed specimens was 67.2% (95% CI 55.4, 78.9) compared to 65.6% (95% CI 53.7, 77.5) for frozen specimens. Conclusion The mailing of unfrozen nasal specimens from remote communities does not compromise the viability of the specimen for viral studies.
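The reported sensitivity and its 95% confidence interval can be reproduced from the abstract's counts with a standard Wald interval, assuming the denominator is the 61 virus detections (41/61 ≈ 67.2% for mailed specimens; this back-calculation is ours, not stated in the abstract):

```python
import math

def sensitivity_with_ci(true_pos: int, total_pos: int, z: float = 1.96):
    """Point estimate and Wald 95% CI for a binomial proportion."""
    p = true_pos / total_pos
    se = math.sqrt(p * (1 - p) / total_pos)
    return p, p - z * se, p + z * se

# 41 of 61 viruses detected in mailed specimens (inferred from 67.2%).
sens, lo, hi = sensitivity_with_ci(41, 61)
```

This yields 67.2% (55.4, 79.0), matching the interval quoted for mailed specimens.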
Abstract:
This special issue of the Journal of Urban Technology brings together five articles that are based on presentations given at the Street Computing Workshop held on 24 November 2009 in Melbourne in conjunction with the Australian Computer-Human Interaction Conference (OZCHI 2009). Our own article introduces the Street Computing vision and explores the potential, challenges, and foundations of this research trajectory. In order to do so, we first look at the currently available sources of information and discuss their link to existing research efforts. Section 2 then introduces the notion of Street Computing and our research approach in more detail. Section 3 looks beyond the core concept itself and summarizes related work in this field of interest. We conclude by introducing the papers that have been contributed to this special issue.
Abstract:
This position paper describes the work in progress towards the goal of building a technical prototype that enables users – those who have little or no knowledge and experience engaging in urban agriculture – to receive information personalised to their location and situation, and allow them to ask questions and share experiences with others. We describe the design process thus far, informed by a survey and a workshop with experts in the field, before concluding with the future direction of this work.
Abstract:
Background The implementation of the Australian Consumer Law in 2011 highlighted the need for better use of injury data to improve the effectiveness and responsiveness of product safety (PS) initiatives. In the PS system, resources are allocated to different priority issues using risk assessment tools. The rapid exchange of information (RAPEX) tool to prioritise hazards, developed by the European Commission, is currently being adopted in Australia. Injury data is required as a basic input to the RAPEX tool in the risk assessment process. One of the challenges in utilising injury data in the PS system is the complexity of translating detailed clinical coded data into broad categories such as those used in the RAPEX tool. Aims This study aims to translate hospital burns data into a simplified format by mapping the International Statistical Classification of Disease and Related Health Problems (Tenth Revision) Australian Modification (ICD-10-AM) burn codes into RAPEX severity rankings, using these rankings to identify priority areas in childhood product-related burns data. Methods ICD-10-AM burn codes were mapped into four levels of severity using the RAPEX guide table by assigning rankings from 1-4, in order of increasing severity. RAPEX rankings were determined by the thickness and surface area of the burn (BSA) with information extracted from the fourth character of T20-T30 codes for burn thickness, and the fourth and fifth characters of T31 codes for the BSA. Following the mapping process, secondary data analysis of 2008-2010 Queensland Hospital Admitted Patient Data Collection (QHAPDC) paediatric data was conducted to identify priority areas in product-related burns. Results The application of RAPEX rankings in QHAPDC burn data showed approximately 70% of paediatric burns in Queensland hospitals were categorised under RAPEX levels 1 and 2, 25% under RAPEX 3 and 4, with the remaining 5% unclassifiable. 
In the PS system, prioritisations are made for issues categorised under RAPEX levels 3 and 4. Analysis of external cause codes within these levels showed that flammable materials (for children aged 10-15 years) and hot substances (for children aged <2 years) were the most frequently identified products. Discussion and conclusions The mapping of ICD-10-AM burn codes into RAPEX rankings showed a favourable degree of compatibility between the two classification systems, suggesting that ICD-10-AM coded burn data can be simplified to more effectively support PS initiatives. Additionally, the secondary data analysis showed that only 25% of all admitted burn cases in Queensland were severe enough to trigger a PS response.
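The thickness part of the mapping step can be sketched as below. The fourth character of an ICD-10-AM T20-T30 burn code encodes burn degree; the specific rank assignments here are illustrative placeholders, not the actual RAPEX guide table:

```python
# Hypothetical degree-to-rank table: first-degree (erythema) burns map low,
# full-thickness burns map high. NOT the real RAPEX guide table values.
THICKNESS_TO_RAPEX = {"1": 1, "2": 2, "3": 4}

def rapex_rank(icd_code: str):
    """Return an illustrative RAPEX severity rank (1-4) for a T20-T30 code.

    The fourth character of the code (e.g. the "3" in "T21.3") gives the
    burn thickness; unmatched or malformed codes are left unclassified.
    """
    if not icd_code.startswith("T") or len(icd_code) < 5:
        return None  # unclassifiable, as for ~5% of cases in the study
    return THICKNESS_TO_RAPEX.get(icd_code[4])

rank = rapex_rank("T21.3")  # full-thickness burn of the trunk
```

In the study, T31 fourth and fifth characters (burn surface area) also feed into the ranking; that second dimension is omitted from this sketch.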
Abstract:
The method of generalized estimating equations (GEE) is a popular tool for analysing longitudinal (panel) data. Often, the covariates collected are time-dependent in nature, for example, age, relapse status, monthly income. When using GEE to analyse longitudinal data with time-dependent covariates, crucial assumptions about the covariates are necessary for valid inferences to be drawn. When those assumptions do not hold or cannot be verified, Pepe and Anderson (1994, Communications in Statistics, Simulations and Computation 23, 939–951) advocated using an independence working correlation assumption in the GEE model as a robust approach. However, using GEE with the independence correlation assumption may lead to significant efficiency loss (Fitzmaurice, 1995, Biometrics 51, 309–317). In this article, we propose a method that extracts additional information from the estimating equations that are excluded by the independence assumption. The method always includes the estimating equations under the independence assumption and the contribution from the remaining estimating equations is weighted according to the likelihood of each equation being a consistent estimating equation and the information it carries. We apply the method to a longitudinal study of the health of a group of Filipino children.
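For a Gaussian outcome, GEE under the independence working correlation gives the pooled least-squares point estimate, paired with a cluster-robust (sandwich) variance that respects the panel structure. A minimal numpy sketch on simulated panel data follows; the subject counts, covariate and true slope of 2 are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
n_subjects, n_times = 50, 4
subject = np.repeat(np.arange(n_subjects), n_times)
x = rng.normal(size=n_subjects * n_times)   # time-dependent covariate
y = 2.0 * x + rng.normal(size=x.size)       # true slope = 2 (illustrative)

# Under the independence working correlation, the Gaussian GEE point
# estimate reduces to pooled least squares across all observations.
X = np.column_stack([np.ones_like(x), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]

# The sandwich variance clusters residuals by subject, which is what
# keeps inference valid despite the working-independence assumption.
resid = y - X @ beta
bread = np.linalg.inv(X.T @ X)
meat = np.zeros((2, 2))
for s in range(n_subjects):
    idx = subject == s
    g = X[idx].T @ resid[idx]
    meat += np.outer(g, g)
cov = bread @ meat @ bread
se = np.sqrt(np.diag(cov))
```

The article's contribution is to recover efficiency lost by this estimator, by weighting in the extra estimating equations that working independence discards.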
Abstract:
The properties of ellipsoidal nanowires are yet to be examined. They have likely applications in sensing, solar cells, microelectronics and cloaking devices. Little is known of the qualities that elliptical nanowires exhibit as the aspect ratio is varied with different dielectric materials, or of how varying these attributes affects plasmon coupling and propagation. It is known that a plasmon can travel further when supported by a thicker circular nanowire, while thinner nanowires are expected to increase quantum dot (QD) coupling. Ellipsoidal nanowires may be a good compromise due to their ability to have both thin and thick dimensions. Furthermore, it has been shown that the plasmon resonances along the main axes of an ellipsoidal particle are governed by the relative aspect ratio of the ellipsoid, which may lead to further control of the plasmon. This research used COMSOL Multiphysics to compute the fundamental plasmon mode supported by an ellipsoidal nanowire and to study this mode for various geometrical parameters, materials and illumination wavelengths. It was found that ellipsoidal nanowires exhibit a minimum in the wavenumber and a maximum in the propagation distance at roughly the same dimensions, highlighting that there is an aspect ratio for which there is poor coupling but low loss. Here we investigate these and related attributes.
Abstract:
The education sector has changed dramatically in the past half decade. In a time of globalisation of education and tightening budgets, various paradigm shifts and challenges have rapidly changed learning and teaching. These include meeting student expectations for more engaging, more interactive learning experiences; an increased focus on delivering content online; and the complexities of fast-changing technologies. Rising to these challenges and responding to them is a complex and multi-faceted task. This paper discusses educational theories and issues and explores current educational practices in the context of teaching undergraduate students via distance education at university. A case study applies a framework drawn from engineering education using the learner-centric concept of academagogy. Results showed that academagogy actively empowers students to build effective learning and engages facilitators in meaningful teaching and delivery methods.
Abstract:
Forward genetic screens have identified numerous genes involved in development and metabolism, and remain a cornerstone of biological research. However, to locate a causal mutation, the practice of crossing to a polymorphic background to generate a mapping population can be problematic if the mutant phenotype is difficult to recognize in the hybrid F2 progeny, or is dependent on parent-specific traits. Here, in a screen for leaf hyponasty mutants, we have performed a single backcross of an ethyl methanesulfonate (EMS)-generated hyponastic mutant to its parent. Whole-genome deep sequencing of a bulked homozygous F2 population and analysis via the Next Generation EMS mutation mapping pipeline (NGM) unambiguously determined the causal mutation to be a single nucleotide polymorphism (SNP) residing in HASTY, a previously characterized gene involved in microRNA biogenesis. We evaluated the feasibility of this backcross approach using three additional SNP mapping pipelines: SHOREmap, the GATK pipeline, and the samtools pipeline. Although there was variance in the identification of EMS SNPs, all returned the same outcome, clearly identifying the causal mutation in HASTY. The simplicity of performing a single parental backcross and genome sequencing a small pool of segregating mutants holds great promise for identifying mutations that may be difficult to map using conventional approaches.
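The logic behind this kind of bulked-segregant SNP mapping can be sketched simply: after a single parental backcross and selfing, unlinked EMS SNPs segregate at roughly 50% mutant-allele frequency in a bulked pool of homozygous mutants, while the causal SNP (and sites linked to it) approach fixation. The positions, frequencies and threshold below are hypothetical, and this is a toy filter, not the NGM pipeline:

```python
def candidate_snps(snp_freqs: dict, threshold: float = 0.9) -> list:
    """Return SNP positions whose mutant-allele frequency in a bulked
    homozygous F2 pool approaches fixation.

    Causal and tightly linked SNPs sit near 1.0; unlinked EMS SNPs
    segregate near 0.5. The 0.9 threshold is illustrative only.
    """
    return sorted(pos for pos, f in snp_freqs.items() if f >= threshold)

# Hypothetical allele frequencies from pooled sequencing reads.
freqs = {101_000: 0.48, 203_500: 0.52, 990_250: 0.99}
hits = candidate_snps(freqs)
```

Real pipelines (NGM, SHOREmap) smooth these frequencies along the chromosome and score the enrichment statistically rather than applying a hard cutoff.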
Abstract:
Recent studies of gene silencing in plants have revealed two RNA-mediated epigenetic processes, RNA-directed RNA degradation and RNA-directed DNA methylation. These natural processes have provided new avenues for developing high-efficiency, high-throughput technology for gene suppression in plants.