Abstract:
Finite Element modelling of bone fracture fixation systems allows computational investigation of the deformation response of the bone to load. Once validated, these models can be easily adapted to explore changes in design or configuration of a fixator. The deformation of the tissue within the fracture gap determines its healing and is often summarised as the stiffness of the construct. FE models capable of reproducing this behaviour would provide valuable insight into the healing potential of different fixation systems. Current model validation techniques lack depth in 6D load and deformation measurements. Other aspects of FE model creation, such as the definition of interfaces between components, have also not been explored. This project investigated the mechanical testing and FE modelling of a bone–plate construct for the determination of stiffness. In-depth 6D measurement and analysis of the generated forces, moments and movements showed large out-of-plane behaviours which had not previously been characterised. Stiffness calculated from the interfragmentary movement was found to be an unsuitable summary parameter because the error propagation is too large. Current FE modelling techniques were applied in compression and torsion, mimicking the experimental setup. Compressive stiffness was well replicated, though torsional stiffness was not. The out-of-plane behaviours prevalent in the experimental work were not replicated in the model. The interfaces between the components were investigated experimentally and through modification of the FE model. Incorporation of the interface modelling techniques into the full construct models had no effect in compression but did act to reduce torsional stiffness, bringing it closer to that of the experiment. The interface definitions had no effect on out-of-plane behaviours, which were still not replicated. Neither current nor novel FE modelling techniques were able to replicate the out-of-plane behaviours evident in the experimental work. New techniques for modelling loads and boundary conditions need to be developed to mimic the effects of the entire experimental system.
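The error-propagation point above can be illustrated with a short calculation: for a construct stiffness estimated as k = F/δ, the relative uncertainties of load and interfragmentary movement add in quadrature, so a small movement measured with a fixed absolute error inflates the stiffness error. The sketch below is illustrative only, using hypothetical values rather than data from these experiments.

```python
import numpy as np

# Illustrative only: hypothetical values, not data from the experiments described above.
F, sigma_F = 500.0, 5.0          # axial load [N] and its measurement uncertainty
delta, sigma_delta = 0.25, 0.05  # interfragmentary movement [mm] and its uncertainty

k = F / delta                    # construct stiffness [N/mm]

# First-order (Gaussian) error propagation for k = F / delta:
rel_err = np.sqrt((sigma_F / F) ** 2 + (sigma_delta / delta) ** 2)
sigma_k = k * rel_err

print(f"stiffness = {k:.0f} +/- {sigma_k:.0f} N/mm ({100 * rel_err:.0f}% relative error)")
# A 20% uncertainty in a small interfragmentary movement dominates and propagates
# almost directly into the stiffness estimate.
```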
Abstract:
There is a song at the beginning of the musical West Side Story where the character Tony sings that “something’s coming, something good.” The song is an anthem of optimism, brimming with promise. This paper is about the long-held promise of information and communication technology (ICT) to transform teaching and learning, to modernise the learning environment of the classroom, and to create a new digital pedagogy. But much of our experience to date in the schooling sector tells more of resistance and reaction than revolution, of more of the same but with a computer in the corner, and of ICT activities as unwelcome time-fillers/time-wasters. Recently, a group of pre-service teachers in a postgraduate primary education degree at an Australian university were introduced to learning objects in an ICT immersion program. Their analyses and related responses, as recorded in online journals, have here been interpreted in terms of TPACK (Technological Pedagogical and Content Knowledge). Against contemporary observation, these students generally displayed high levels of competence and highly positive dispositions towards the integration of ICT in their future classrooms. In short, they displayed the same optimism and confidence as the fictional “Tony” in believing that something good was coming.
Abstract:
Background: Random Breath Testing (RBT) is the main drink driving law enforcement tool used throughout Australia. International comparative research considers Australia to have the most successful RBT program of any country in terms of crash reductions (Erke, Goldenbeld, & Vaa, 2009). This success is attributed to the program's high intensity (Erke et al., 2009). Our review of the extant literature suggests that there is no research evidence indicating an optimal level of alcohol breath testing. That is, no research exists to guide policy on whether alcohol-related crashes reach a point of diminishing returns as a result of either saturated or targeted RBT testing. Aims: In this paper we first provide an examination of RBTs and alcohol-related crashes across Australian jurisdictions. We then address the question of whether an optimal level of random breath testing exists by examining the relationship between the number of RBTs conducted and the occurrence of alcohol-related crashes over time, across all Australian states. Method: To examine the association between RBT rates and alcohol-related crashes, and to assess whether an optimal ratio of RBT tests per licenced driver can be determined, we draw on three administrative data sources from each jurisdiction. Where possible, the data collected span January 1st 2000 to September 30th 2012. The RBT administrative dataset includes the number of Random Breath Tests (RBTs) conducted per month. The traffic crash administrative dataset contains the aggregated monthly count of traffic crashes where an individual's recorded BAC reached or exceeded 0.05 g per 100 ml of blood. The licenced driver data were the monthly number of registered licenced drivers spanning January 2000 to December 2011. Results: The data highlight that the Australian story is not reflective of all states and territories. The stable RBT to licenced driver ratio in Queensland (1:1) is associated with a stable alcohol-related crash rate of 5.5 per 100,000 licenced drivers. Yet in South Australia, where a relatively stable RBT to licenced driver ratio of 1:2 is maintained, the rate of alcohol-related traffic crashes is substantially lower at 3.7 per 100,000. We use joinpoint regression techniques and varying regression models to fit the data and compare the different patterns between jurisdictions. Discussion: The results of this study provide an updated review and evaluation of RBTs conducted in Australia and examine the association between RBTs and alcohol-related traffic crashes. We also present an evidence base to guide policy decisions for RBT operations.
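The two headline quantities in the results, the RBT-to-licenced-driver ratio and the alcohol-related crash rate per 100,000 licenced drivers, are straightforward to derive from the three administrative tables described. The sketch below is a minimal illustration with invented values and hypothetical column names; the actual jurisdictional datasets may be structured differently.

```python
import pandas as pd

# Hypothetical monthly administrative tables (column names assumed for illustration).
months = pd.period_range("2000-01", periods=3, freq="M")
rbt = pd.DataFrame({"month": months, "rbt_tests": [310_000, 305_000, 320_000]})
crashes = pd.DataFrame({"month": months, "alcohol_crashes": [18, 22, 17]})
drivers = pd.DataFrame({"month": months, "licenced_drivers": [2_600_000, 2_602_000, 2_605_000]})

df = rbt.merge(crashes, on="month").merge(drivers, on="month")

# Annualised tests per licenced driver (e.g. roughly 1:1 in Queensland, 1:2 in
# South Australia) and the alcohol-related crash rate per 100,000 licenced drivers.
df["rbt_per_driver"] = 12 * df["rbt_tests"] / df["licenced_drivers"]
df["crash_rate_per_100k"] = 100_000 * df["alcohol_crashes"] / df["licenced_drivers"]
print(df[["month", "rbt_per_driver", "crash_rate_per_100k"]])
```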
Abstract:
Background: Random Breath Testing (RBT) remains a central enforcement strategy to deter and apprehend drink drivers in Queensland (Australia). Despite this, there is little published research regarding exact drink driving apprehension rates across the state as measured through RBT activities. Aims: The aim of the current study was to examine the prevalence of apprehending drink drivers in urban versus rural areas. Methods: The Queensland Police Service provided data relating to the number of RBTs conducted and apprehensions made for the period 1 January 2000 to 31 December 2011. Results: In this period, 35,082,386 random breath tests (both mobile and stationary) were conducted in Queensland, resulting in 248,173 individuals being apprehended for drink driving offences. Overall drink driving apprehension rates appear to have decreased across time. Close examination of the data revealed that the highest proportion of drink driving apprehensions (relative to RBT testing rates) was in the Northern and Far Northern regions of Queensland (i.e., rural areas). In contrast, the lowest proportions were observed within the two Brisbane metropolitan regions (i.e., urban areas). However, differences in enforcement styles across the urban and rural regions need to be considered. Discussion and conclusions: The research presentation will further outline the major findings of the study with regard to maximising the efficiency of RBT operations within both urban and rural areas of Queensland, Australia.
Abstract:
In South and Southeast Asia, postharvest loss causes material waste of up to 66% in fruits and vegetables, 30% in oilseeds and pulses, and 49% in roots and tubers. The efficiency of postharvest equipment directly affects industrial-scale food production. To enhance current processing methods and devices, it is essential to analyze the responses of food materials under loading operations. Food materials undergo different types of mechanical loading during postharvest and processing stages. Therefore, it is important to determine the properties of these materials under different types of loads, such as tensile, compression, and indentation. This study presents a comprehensive analysis of the available literature on the tensile properties of different food samples. The aim of this review was to categorize the available methods of tensile testing for agricultural crops and food materials and to identify an appropriate sample size and tensile test method. The results were then applied to perform tensile tests on pumpkin flesh and peel samples, in particular on arc-sided samples at a constant loading rate of 20 mm min⁻¹. The results showed the maximum tensile stress of the pumpkin flesh and peel samples to be 0.535 and 1.45 MPa, respectively. The elastic modulus of the flesh and peel samples was 6.82 and 25.2 MPa, respectively, while the failure modulus values were 14.51 and 30.88 MPa, respectively. The results of the tensile tests were also used to develop a finite element model of the mechanical peeling of tough-skinned vegetables. However, further investigation is needed to study the effects of deformation rate, moisture content, and tissue texture on the tensile responses of food materials.
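The reported quantities (maximum tensile stress and elastic modulus) are standard reductions of a force-extension record. The sketch below shows one way such values could be computed; the data, specimen geometry and linear-region cutoff are all hypothetical assumptions, not the pumpkin measurements or the paper's exact definitions.

```python
import numpy as np

# Hypothetical specimen geometry and synthetic force-extension curve (illustrative only).
gauge_length = 20.0      # mm, assumed gauge length of the arc-sided specimen
cross_section = 40.0     # mm^2, assumed cross-sectional area

extension = np.linspace(0.0, 2.0, 50)              # mm
force = 21.4 * extension - 4.0 * extension**2      # N, synthetic softening curve

stress = force / cross_section                     # MPa (N/mm^2)
strain = extension / gauge_length                  # dimensionless

max_stress = stress.max()                          # "maximum tensile stress"

# Elastic modulus from the slope of the initial linear region of the stress-strain curve.
linear = strain <= 0.02
E = np.polyfit(strain[linear], stress[linear], 1)[0]

print(f"max tensile stress = {max_stress:.3f} MPa, elastic modulus = {E:.2f} MPa")
```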
Abstract:
This book provides a general framework for specifying, estimating, and testing time series econometric models. Special emphasis is given to estimation by maximum likelihood, but other methods are also discussed, including quasi-maximum likelihood estimation, generalized method of moments estimation, nonparametric estimation, and estimation by simulation. An important advantage of adopting the principle of maximum likelihood as the unifying framework for the book is that many of the estimators and test statistics proposed in econometrics can be derived within a likelihood framework, thereby providing a coherent vehicle for understanding their properties and interrelationships. In contrast to many existing econometric textbooks, which deal mainly with the theoretical properties of estimators and test statistics through a theorem-proof presentation, this book squarely addresses implementation to provide direct conduits between the theory and applied work.
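To make the unifying role of maximum likelihood concrete, the sketch below fits a Gaussian AR(1) model, y_t = c + φ·y_{t-1} + e_t, by maximising the conditional log-likelihood with a numerical optimiser. It is purely illustrative and is not code from the book; the simulated data and starting values are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Simulate an AR(1) series with c = 0.5, phi = 0.8, sigma = 1 (illustrative data).
rng = np.random.default_rng(0)
y = np.empty(500)
y[0] = 0.0
for t in range(1, 500):
    y[t] = 0.5 + 0.8 * y[t - 1] + rng.normal(scale=1.0)

def neg_loglik(params, y):
    """Negative conditional Gaussian log-likelihood of an AR(1) model."""
    c, phi, log_sigma = params
    sigma = np.exp(log_sigma)                # parameterise in logs to keep sigma > 0
    resid = y[1:] - c - phi * y[:-1]         # one-step-ahead prediction errors
    n = resid.size
    return 0.5 * n * np.log(2 * np.pi * sigma**2) + 0.5 * np.sum(resid**2) / sigma**2

res = minimize(neg_loglik, x0=np.array([0.0, 0.0, 0.0]), args=(y,), method="BFGS")
c_hat, phi_hat, sigma_hat = res.x[0], res.x[1], np.exp(res.x[2])
print(f"c = {c_hat:.3f}, phi = {phi_hat:.3f}, sigma = {sigma_hat:.3f}")
```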
Abstract:
Skin cancer is one of the most commonly occurring cancer types, with substantial social, physical, and financial burdens on both individuals and societies. Although the role of UV light in initiating skin cancer development has been well characterized, genetic studies continue to show that predisposing factors can influence an individual's susceptibility to skin cancer and response to treatment. In the future, it is hoped that genetic profiles, comprising a number of genetic markers collectively involved in skin cancer susceptibility and response to treatment or prognosis, will aid in more accurately informing practitioners' choices of treatment. Individualized treatment based on these profiles has the potential to increase the efficacy of treatments, saving both time and money for the patient by avoiding the need for extensive or repeated treatment. Increased treatment responses may in turn prevent recurrence of skin cancers, reducing the burden of this disease on society. Currently existing pharmacogenomic tests, such as those that assess variation in the metabolism of the anticancer drug fluorouracil, have the potential to reduce the toxic effects of anti-tumor drugs used in the treatment of non-melanoma skin cancer (NMSC) by determining individualized appropriate dosage. If the savings generated by reducing adverse events negate the costs of developing these tests, pharmacogenomic testing may increasingly inform personalized NMSC treatment.
Abstract:
Materials used in engineering always contain imperfections or defects, which significantly affect their performance. Based on large-scale molecular dynamics simulation and Euler–Bernoulli beam theory, the influence of different pre-existing surface defects on the bending properties of Ag nanowires (NWs) is studied in this paper. It is found that the nonlinear-elastic deformation, as well as the flexural rigidity of the NW, is insensitive to the different surface defects considered in this study. On the contrary, an evident decrease of the yield strength is observed due to the existence of defects. In-depth inspection of the deformation process reveals that, at the onset of plastic deformation, dislocation embryos initiate from the locations of surface defects, and the plastic deformation is dominated by the nucleation and propagation of partial dislocations at the considered temperature. In particular, the generation of stair-rod partial dislocations and Lomer–Cottrell locks is observed for both perfect and defected NWs. The generation of these structures inhibits early yielding of the NW, which explains why more defects do not necessarily mean a lower critical force.
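As a pointer to how Euler–Bernoulli beam theory yields a flexural rigidity from simulated bending data, the sketch below assumes a doubly clamped nanowire loaded at midspan, for which δ = FL³/(192·EI). The configuration, dimensions and force-deflection data are assumptions for illustration and may differ from the paper's actual simulation setup.

```python
import numpy as np

# Assumed doubly clamped beam, midspan load; synthetic linear force-deflection data.
L = 30.0                                   # nm, suspended length of the nanowire
force = np.linspace(0.0, 20.0, 20)         # nN, applied midspan load
deflection = force * L**3 / (192 * 1.2e4)  # nm, synthetic response (EI = 1.2e4 nN*nm^2)

# Euler-Bernoulli: delta = F * L^3 / (192 * E * I), so EI follows from the slope
# of the force-deflection curve in the elastic regime.
slope = np.polyfit(deflection[1:], force[1:], 1)[0]   # nN / nm
EI = slope * L**3 / 192                               # flexural rigidity, nN * nm^2
print(f"flexural rigidity EI = {EI:.3g} nN*nm^2")
```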
Abstract:
Crashes on motorways contribute to a significant proportion (40-50%) of non-recurrent motorway congestion. Hence, reducing crashes will help address congestion issues (Meyer, 2008). Crash likelihood estimation studies commonly focus on traffic conditions in a short time window around the time of a crash, while longer-term pre-crash traffic flow trends are neglected. In this paper we will show, through data mining techniques, that a relationship between pre-crash traffic flow patterns and crash occurrence on motorways exists, and that this knowledge has the potential to improve the accuracy of existing models and opens the path for new development approaches. The data for the analysis was extracted from records collected between 2007 and 2009 on the Shibuya and Shinjuku lines of the Tokyo Metropolitan Expressway in Japan. The dataset includes a total of 824 rear-end and sideswipe crashes that have been matched with traffic flow data from the hour prior to the crash using an incident detection algorithm. Traffic flow trends (traffic speed/occupancy time series) revealed that crashes could be clustered with regard to the dominant traffic flow pattern prior to the crash. Using the k-means clustering method allowed the crashes to be clustered based on their flow trends rather than their distance. Four major trends were found in the clustering results. Based on these findings, crash likelihood estimation algorithms can be fine-tuned to the monitored traffic flow conditions with a sliding window of 60 minutes to increase the accuracy of the results and minimize false alarms.
Abstract:
Crashes that occur on motorways contribute to a significant proportion (40-50%) of non-recurrent motorway congestion. Hence, reducing the frequency of crashes assists in addressing congestion issues (Meyer, 2008). Crash likelihood estimation studies commonly focus on traffic conditions in a short time window around the time of a crash, while longer-term pre-crash traffic flow trends are neglected. In this paper we will show, through data mining techniques, that a relationship between pre-crash traffic flow patterns and crash occurrence on motorways exists. We will compare these patterns with normal traffic trends and show that this knowledge has the potential to improve the accuracy of existing models and opens the path for new development approaches. The data for the analysis was extracted from records collected between 2007 and 2009 on the Shibuya and Shinjuku lines of the Tokyo Metropolitan Expressway in Japan. The dataset includes a total of 824 rear-end and sideswipe crashes that have been matched with their corresponding traffic flow data using an incident detection algorithm. Traffic trends (traffic speed time series) revealed that crashes can be clustered with regard to the dominant traffic patterns prior to the crash. The K-Means clustering method with a Euclidean distance function was used to cluster the crashes, as sketched below. Then, normal-situation data were extracted based on the time distribution of crashes and clustered to compare with the "high risk" clusters. Five major trends were found in the clustering results for both high-risk and normal conditions. The study found that the traffic regimes differed in their speed trends. Based on these findings, crash likelihood estimation models can be fine-tuned to the monitored traffic conditions with a sliding window of 30 minutes to increase the accuracy of the results and minimize false alarms.
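The sketch below illustrates the clustering step described above: pre-crash speed time series grouped by k-means under the Euclidean distance. The data, window discretisation and cluster count are synthetic assumptions, not the Tokyo Metropolitan Expressway dataset.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
n_crashes, n_bins = 200, 12   # e.g. the hour before each crash in 5-minute bins (assumed)

# Synthetic pre-crash speed profiles (km/h): some free-flow, some decaying toward congestion.
free_flow = 80 + rng.normal(0, 5, size=(n_crashes // 2, n_bins))
breakdown = np.linspace(80, 30, n_bins) + rng.normal(0, 5, size=(n_crashes // 2, n_bins))
profiles = np.vstack([free_flow, breakdown])

# K-means with Euclidean distance over the speed time series.
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(profiles)

# Each cluster centre is a representative pre-crash speed trend; normal-condition
# profiles could be clustered the same way and compared against these "high risk" trends.
for k, centre in enumerate(kmeans.cluster_centers_):
    print(f"cluster {k}: mean speed trend {np.round(centre, 1)}")
```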
Abstract:
Automated crowd counting has become an active field of computer vision research in recent years. Existing approaches are scene-specific, as they are designed to operate in the single camera viewpoint that was used to train the system. Real world camera networks often span multiple viewpoints within a facility, including many regions of overlap. This paper proposes a novel scene-invariant crowd counting algorithm that is designed to operate across multiple cameras. The approach uses camera calibration to normalise features between viewpoints and to compensate for regions of overlap. This compensation is performed by constructing an 'overlap map' which provides a measure of how much an object at one location is visible within other viewpoints. An investigation into the suitability of various feature types and regression models for scene-invariant crowd counting is also conducted. The features investigated include object size, shape, edges and keypoints. The regression models evaluated include neural networks, K-nearest neighbours, linear regression and Gaussian process regression. Our experiments demonstrate that accurate crowd counting was achieved across seven benchmark datasets, with optimal performance observed when all features were used and when Gaussian process regression was used. The combination of scene invariance and multi-camera crowd counting is evaluated by training the system on footage obtained from the QUT camera network and testing it on three cameras from the PETS 2009 database. Highly accurate crowd counting was observed, with a mean relative error of less than 10%. Our approach enables a pre-trained system to be deployed in a new environment without any additional training, bringing the field one step closer toward a 'plug and play' system.
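The regression step at the heart of this kind of counting-by-features approach can be sketched as below: per-frame foreground features (assumed here to already be calibration-normalised) are mapped to a person count with Gaussian process regression. The features, synthetic data and kernel choice are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)

# Synthetic training data: ground-truth people per frame and two holistic features
# (total blob area and total edge pixels), roughly proportional to the count plus noise.
counts = rng.integers(0, 40, size=150).astype(float)
features = np.column_stack([
    counts * 310 + rng.normal(0, 400, size=150),   # total foreground area (pixels)
    counts * 95 + rng.normal(0, 150, size=150),    # total edge pixels
])

kernel = 1.0 * RBF(length_scale=1000.0) + WhiteKernel(noise_level=1.0)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(features, counts)

# Predict the count (with uncertainty) for an unseen frame's feature vector.
test_features = np.array([[31.0 * 310, 31.0 * 95]])
pred, std = gpr.predict(test_features, return_std=True)
print(f"estimated count = {pred[0]:.1f} +/- {std[0]:.1f}")
```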
Abstract:
A cross-sectional survey was conducted, and the construct validity and reliability of the Brisbane Practice Environment Measure were examined in an Australian sample of registered nurses. Nurses were randomly selected from the database of an Australian nursing organization. The original 33 items of the Brisbane Practice Environment Measure were used to establish the psychometric properties using confirmatory factor analysis. Cronbach's alpha was 0.938 for the total scale and ranged from 0.657 to 0.887 for the subscales. A five-factor structure of the measure was confirmed: χ² = 944.622 (P < 0.01), χ²/d.f. ratio = 2.845, Tucker–Lewis Index = 0.929, Root Mean Square Error = 0.061 and Comparative Fit Index = 0.906. The selected 28 items of the measure proved reliable and valid in measuring effects of the practice environment upon Australian nurses. The implication is that regular measurement of the practice environment using these 28 items might assist in the development of strategies to improve job satisfaction and retention of registered nurses in Australia.
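The internal-consistency statistic reported above, Cronbach's alpha, is computed from the item variances and the variance of the total score. The sketch below applies the standard formula to a hypothetical respondent-by-item matrix; it is illustrative only and not the survey data.

```python
import numpy as np

# Hypothetical item responses: 300 respondents, 28 Likert-style items driven by one trait.
rng = np.random.default_rng(7)
latent = rng.normal(size=(300, 1))
items = latent + rng.normal(scale=0.8, size=(300, 28))

def cronbach_alpha(x):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = x.shape[1]
    item_vars = x.var(axis=0, ddof=1)
    total_var = x.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

print(f"Cronbach's alpha = {cronbach_alpha(items):.3f}")
```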
Abstract:
The purpose of the study was to undertake rigorous psychometric testing of the Caring Efficacy Scale in a sample of registered nurses. A cross-sectional survey of 2000 registered nurses was undertaken. The Caring Efficacy Scale was used to establish the psychometric properties of its selected items. Cronbach's alpha was used to assess the reliability of the data, and exploratory and confirmatory factor analyses were undertaken to validate the factors. Confirmatory factor analysis confirmed a two-factor structure: Confidence to Care, and Doubts and Concerns. The Caring Efficacy Scale has undergone rigorous psychometric testing, affording evidence of internal consistency and goodness-of-fit indices within satisfactory ranges. The Caring Efficacy Scale is valid for use in an Australian population of registered nurses. The scale can be scored by subscale or as a total score reflective of self-efficacy in nursing. This scale may assist nursing educators to predict levels of caring efficacy.
Abstract:
We conducted on-road and simulator studies to explore the mechanisms underpinning driver-rider crashes. In Study 1, the verbal protocols of 40 drivers and riders were assessed at intersections as part of a 15 km on-road route in Melbourne. Network analysis of the verbal transcripts highlighted key differences in the situation awareness of drivers and riders at intersections. In a further study using a driving simulator, we examined the influence of acute exposure to motorcyclists on car drivers. In a 15 min simulated drive, 40 drivers saw either no motorcycles or a high number of motorcycles in the surrounding traffic. In a subsequent 45-60 min drive, drivers were asked to detect motorcycles in traffic. The proportion of motorcycles was manipulated so that there was either a high (120) or low (6) number of motorcycles during the drive. Drivers previously exposed to a high number of motorcycles were significantly faster at detecting motorcycles. Fundamentally, the incompatible situation awareness of drivers and riders at intersections underpins the conflicts. Study 2 offers some suggestion for a countermeasure here, although more research on schema and exposure training to support safer interactions is needed.
Abstract:
QUT Library continues to rethink research support, with eResearch as a primary driver. Support for the development of the Lens, an open global cyberinfrastructure, has been especially important in promoting technology transfer and in responding to researchers' needs to follow innovation landscapes not only within the scientific but also the patent literature. The Lens (http://www.lens.org/lens/) project makes innovation more efficient, fair, transparent and inclusive. It is a joint effort between Cambia (http://www.cambia.org.au) and Queensland University of Technology (QUT). The Lens serves more than 84 million patent documents from around the world as open, annotatable digital public goods that are integrated with scholarly and technical literature along with regulatory and business data. Users can link from search results to visualizations and document clusters; from a patent document description to its full text; and from there, if applicable, to the associated sequence data. Figure 1 shows a BLAST alignment (DNA) using the Lens. A unique feature of the Lens is the ability to embed search and BLAST results into blogs and websites, and to provide real-time updates to them. PatSeq Explorer (http://www.lens.org/lens/bio/patseqexplorer) allows users to navigate patent sequences that map onto the human genome and, in the future, many other genomes. PatSeq Explorer offers three-level views of the sequence information and links each group of sequences at the chromosomal level to their corresponding patent documents in the Lens. By integrating sequence search, patent search and document clustering capabilities, users can now understand both the big picture and the fine detail of the true extent and scope of genetic sequence patents. QUT Library supported Cambia in developing, testing and promoting the Lens. This poster demonstrates QUT Library's provision of best-practice and holistic research support to a research group, and how QUT Librarians have acquired new capabilities to meet the needs of researchers beyond traditional research support practices.