852 results for technology-enhanced assessment
Abstract:
A number of advanced driver assistance systems (ADAS) are currently being released on the market, providing safety functions to drivers such as collision avoidance, adaptive cruise control or enhanced night vision. These systems, however, are inherently limited by their sensory range: they cannot gather information from outside this range, also called their "perceptive horizon". Cooperative systems are a developing research avenue that aims at providing extended safety and comfort functionalities by introducing vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) wireless communications among road actors. This paper presents the problems addressed by cooperative systems, their advantages and contributions to road safety, and exposes some limitations related to market penetration, sensor accuracy and communications scalability. It explains the issues involved in implementing extended perception, a central contribution of cooperative systems. The initial steps of an evaluation of data fusion architectures for extended perception are presented.
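As a rough illustration of extended perception, the following Python sketch merges locally sensed obstacle tracks with tracks received over V2V, using a simple nearest-neighbour gate to drop duplicates; the Track fields, the gate value and the fusion rule are all hypothetical simplifications of what a real cooperative data fusion architecture would use.

    from dataclasses import dataclass

    @dataclass
    class Track:
        x: float      # position east (m) in a shared reference frame
        y: float      # position north (m)
        speed: float  # m/s
        source: str   # "local" or the broadcasting vehicle's id

    def fuse(local_tracks, v2v_tracks, gate_m=2.5):
        """Remote tracks within gate_m of a local track are treated as
        duplicate observations of an object already seen; the remainder
        extend the vehicle's perceptive horizon."""
        fused = list(local_tracks)
        for r in v2v_tracks:
            duplicate = any((r.x - l.x) ** 2 + (r.y - l.y) ** 2 < gate_m ** 2
                            for l in local_tracks)
            if not duplicate:
                fused.append(r)
        return fused

    picture = fuse(
        [Track(10.0, 0.0, 13.9, "local")],
        [Track(10.5, 0.3, 14.0, "car-42"),    # same object as the local track
         Track(250.0, 4.0, 0.0, "car-17")])   # stopped car beyond sensor range
    print(len(picture))  # -> 2: the stopped car is now part of the world model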
Abstract:
Bridges are an important part of a nation's infrastructure, and reliable monitoring methods are necessary to ensure their safety and efficiency. Most bridges in use today were built decades ago and are now subjected to changes in load patterns that can cause localized distress, which can result in bridge failure if not corrected. Early detection of damage helps prolong the lives of bridges and prevent catastrophic failures. This paper briefly reviews the various technologies currently used in health monitoring of bridge structures and in particular discusses the application and challenges of acoustic emission (AE) technology. Some results from laboratory experiments on a bridge model are also presented; the main objectives of these experiments are source localisation and assessment. The findings of the study can be expected to enhance knowledge of the acoustic emission process and thereby aid the development of an effective bridge structure diagnostics system.
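For illustration, the classic two-sensor linear localisation that underpins such AE experiments can be written in a few lines of Python; this is the textbook time-of-arrival-difference method under an assumed constant wave speed, not necessarily the exact procedure used in the study, and the numbers below are hypothetical.

    def locate_1d(t1, t2, sensor_gap_m, wave_speed_ms):
        # Source at distance x from sensor 1: t1 = t0 + x/v and
        # t2 = t0 + (gap - x)/v, so t1 - t2 = (2x - gap)/v and
        # x = (gap + v*(t1 - t2)) / 2.
        return 0.5 * (sensor_gap_m + wave_speed_ms * (t1 - t2))

    # An emission arriving 0.2 ms earlier at sensor 1, sensors 2 m apart,
    # wave speed ~5000 m/s in steel:
    x = locate_1d(t1=1.0e-4, t2=3.0e-4, sensor_gap_m=2.0, wave_speed_ms=5000.0)
    print(f"source ~{x:.2f} m from sensor 1")  # -> 0.50 m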
Abstract:
Background: The admission and assessment of patients for elective procedures is a task faced by all healthcare organisations that provide elective surgical services. Several different strategies have been used to facilitate the management of these tasks. Nurse-led preadmission clinics or services have been implemented in many health services as one of these management strategies; however, their effectiveness has not been established.
Objectives: The objective of this review was to examine the available research on the effectiveness of nurse-led elective surgery preoperative assessment clinics or services on patient outcomes.
Results: Of the 19 included articles, there were 10 audits of patient and hospital data, 3 surveys or questionnaires, 3 descriptive studies, 1 action research design, 1 prospective observational study and 1 RCT. Five of the ten studies reporting data on cancellation rates found that nurse-led preadmission services reduced the number of day-of-surgery cancellations. Non-attendance for surgery was also reduced, with nine studies reporting decreases in the number of patients failing to attend. Eight studies reporting data on patient or parent satisfaction found high levels of satisfaction with nurse-led preadmission services. Three of four studies investigating the effect of the nurse-led preadmission service on patient anxiety found a reduction in reported anxiety levels. Three studies found that preoperative preparation was enhanced by the use of a nurse-led preadmission service.
Conclusions: While all included studies reported evidence of effectiveness for nurse-led preadmission services on a wide range of outcomes for elective surgery patients, the lack of experimental trials means that the level of evidence is low, and further research is needed.
Implications for practice: Nurse-led preadmission services may be an effective strategy for reducing procedural cancellations, failure to attend for procedures, and patient anxiety; however, the current level of evidence is low.
Abstract:
The detection and potential treatment of oxidative stress in biological systems has been explored using isoindoline-based nitroxide radicals. A novel tetraethyl-fluorescein nitroxide was synthesised for use as a profluorescent probe for redox processes in biological systems. Both this tetraethyl system and a tetramethyl-fluorescein nitroxide were shown to be sensitive and selective probes for superoxide in vitro. The redox environment of cellular systems was also explored using the tetramethyl-fluorescein species, based on its reduction to the hydroxylamine. Flow cytometry was employed to assess the extent of nitroxide reduction, reflecting the overall cellular redox environment. Treatment of normal fibroblasts with rotenone and 2-deoxyglucose resulted in an oxidising cellular environment, as shown by the lack of reduction of the fluorescein-nitroxide system. Assessment of the tetraethyl-fluorescein nitroxide system in the same way demonstrated its enhanced resistance to reduction and offers the potential to detect and image biologically relevant reactive oxygen species directly. Importantly, these profluorescent nitroxide compounds were shown to be more effective than more widely used, commercially available probes for reactive oxygen species such as 2′,7′-dichlorodihydrofluorescein diacetate. Fluorescence imaging of the tetramethyl-fluorescein nitroxide and a number of other rhodamine-nitroxide derivatives was undertaken, revealing the differential cellular localisation of these systems and thus their potential for the detection of redox changes in specific cellular compartments. As well as developing novel methods for the detection of oxidative stress, a number of novel isoindoline nitroxides were synthesised for their potential application as small-molecule antioxidants. These compounds incorporated known pharmacophores into the isoindoline-nitroxide structure in an attempt to increase their efficacy in biological systems. A primary and a secondary amine nitroxide were synthesised, incorporating the phenethylamine backbone of the sympathomimetic amine class of drugs. Initial assessment of the novel primary amine derivative indicated a protective effect comparable to that of 5-carboxy-1,1,3,3-tetramethylisoindolin-2-yloxyl. Methoxy-substituted nitroxides were also synthesised as potential antioxidants owing to their structural similarity to some amphetamine-type stimulants. A copper-catalysed methodology provided access to both the mono- and di-substituted methoxy-nitroxides. Deprotection of the ethers in these compounds using boron tribromide successfully produced a phenol-nitroxide; however, the catechol moiety in the disubstituted derivative appeared to react with the nitroxide to produce quinone-like degradation products. A novel fluoran-nitroxide was also synthesised from the methoxy-substituted nitroxide, providing a pH-sensitive spin probe. An amino-acid precursor containing a nitroxide moiety was also synthesised for application as a dual-action antioxidant. N-Acetyl protection of the nitroxide radical was necessary prior to the Erlenmeyer reaction with N-acetyl glycine. Hydrolysis and reduction of the azlactone intermediate produced a novel amino-acid precursor with significant potential as an effective antioxidant.
Abstract:
An approach aimed at enhancing learning by matching individual students' preferred cognitive styles to computer-based instructional (CBI) material is presented. This approach was used in teaching some components of a third-year unit in an electrical engineering course at the Queensland University of Technology. Cognitive style characteristics of perceiving and processing information were considered. The bimodal nature of cognitive styles (analytic/imager, analytic/verbalizer, wholist/imager and wholist/verbalizer) was examined in order to assess the full ramifications of cognitive styles on learning. In a quasi-experimental format, students' cognitive styles were analysed with cognitive style analysis (CSA) software, and on the basis of the CSA results the system assigned students to either matched or mismatched CBI material. The consistently better performance by the matched group suggests potential for further investigations in which the limitations cited in this paper are eliminated. Analysing the differences between cognitive styles on individual test tasks also suggests that certain test tasks may better suit certain cognitive styles.
Abstract:
The paper provides an assessment of the performance of commercial Real Time Kinematic (RTK) systems over longer than recommended inter-station distances. The experiments were set up to test and analyse solutions from the i-MAX, MAX and VRS systems operated with three triangle-shaped network cells, with average inter-station distances of 69 km, 118 km and 166 km. The performance characteristics appraised included initialisation success rate, initialisation time, RTK position accuracy and availability, ambiguity resolution risk and RTK integrity risk, in order to provide a wider perspective on the performance of the tested systems.
The results showed that the performance of all network RTK solutions assessed was affected to a similar degree by the increase in inter-station distance. The MAX solution achieved the highest initialisation success rate, 96.6% on average, albeit with a longer initialisation time. The two VRS approaches achieved a lower initialisation success rate of 80% over the large triangle. In terms of RTK positioning accuracy after successful initialisation, the results indicated good agreement between the actual error growth in both the horizontal and vertical components and the accuracy specified by the manufacturers in RMS and parts-per-million (ppm) values.
Additionally, the VRS approaches performed better than MAX and i-MAX when tested on the standard triangle network with a mean inter-station distance of 69 km. However, as the inter-station distance increases, the network RTK software may fail to generate VRS corrections and revert to operating in the nearest single-base RTK (or RAW) mode. Position uncertainty occasionally exceeded 2 metres, showing that the RTK rover software was using an incorrectly fixed ambiguity solution to estimate the rover position rather than automatically dropping back to an ambiguity-float solution. The results identified that the risk of incorrectly resolving ambiguities reached 18%, 20%, 13% and 25% for i-MAX, MAX, Leica VRS and Trimble VRS respectively when operating over the large triangle network. Additionally, the Coordinate Quality indicator values given by the Leica GX1230 GG rover receiver tended to be over-optimistic and did not function well in identifying incorrectly fixed integer ambiguity solutions. In summary, this independent assessment has identified some problems and failures that can occur in all of the systems tested, especially when they are pushed beyond the recommended limits. While such failures are expected, they offer useful insights into where users should be wary and how manufacturers might improve their products. The results also demonstrate that integrity monitoring of RTK solutions is indeed necessary for precision applications, and thus deserves serious attention from researchers and system providers.
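To make the RMS-plus-ppm accuracy model concrete, the short sketch below shows how a specification of the form "a mm + b ppm" grows over the three tested inter-station distances; the 8 mm + 1 ppm figures are typical placeholder values, not the specification of any particular system in the study.

    def expected_error_mm(base_mm, ppm, distance_km):
        # 1 ppm of baseline length corresponds to 1 mm of error per km
        return base_mm + ppm * distance_km

    for d_km in (69, 118, 166):  # mean inter-station distances of the three cells
        print(f"{d_km:>3} km: ~{expected_error_mm(8, 1, d_km):.0f} mm horizontal")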
Abstract:
Large mysticete whales represent a unique challenge for chemical risk assessment. Few epidemiological investigations are possible due to the low incidence of adult stranding events. Similarly, their often extreme life-history adaptations of prolonged migration and fasting challenge exposure assumptions. Molecular biomarkers offer the potential to complement information yielded through tissue chemical analysis, as well as providing evidence of a molecular response to chemical exposure. In this study we confirm the presence of cytochrome P450 reductase (CPR) and cytochrome P450 isoenzyme 1A1 (CYP1A1) in epidermal tissue of southern hemisphere humpback whales (Megaptera novaeangliae). The detection of CYP1A1 in the integument of the humpback whale affords the opportunity for further quantitative non-destructive investigations of enzyme activity as a function of chemical stress.
Abstract:
Modern statistical models and computational methods can now incorporate uncertainty in the parameters used in Quantitative Microbial Risk Assessments (QMRA). Many QMRAs use Monte Carlo methods but work from fixed estimates of means, variances and other parameters. We illustrate the ease of estimating all parameters contemporaneously with the risk assessment, incorporating all the parameter uncertainty arising from the experiments from which these parameters are estimated. A Bayesian approach is adopted, using Markov chain Monte Carlo (MCMC) Gibbs sampling via the freely available software WinBUGS. The method and its ease of implementation are illustrated by a case study that involves incorporating three disparate datasets into an MCMC framework. The probabilities of infection when the uncertainty associated with parameter estimation is incorporated into a QMRA are shown to be considerably more variable over various dose ranges than the analogous probabilities obtained when constants from the literature are simply 'plugged in', as is done in most QMRAs. Neglecting these sources of uncertainty may lead to erroneous decisions for public health and risk management.
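The contrast at the heart of this abstract, a plugged-in constant versus full parameter uncertainty, can be sketched in a few lines of Python using the standard exponential dose-response model P(infection) = 1 - exp(-r*dose). The posterior draws for r are simulated here purely for illustration; in the study they are produced by Gibbs sampling of real experimental datasets in WinBUGS.

    import math, random

    random.seed(1)
    dose = 100.0  # hypothetical ingested dose (organisms)

    # Plug-in approach: one fixed literature value for r
    r_fixed = 0.005
    p_plugin = 1 - math.exp(-r_fixed * dose)

    # Bayesian approach: push every posterior draw of r through the model
    r_draws = [random.lognormvariate(math.log(0.005), 0.5) for _ in range(10000)]
    p_draws = sorted(1 - math.exp(-r * dose) for r in r_draws)
    lo, hi = p_draws[250], p_draws[9750]  # central 95% interval

    print(f"plug-in:          P(infection) = {p_plugin:.3f}")
    print(f"with uncertainty: 95% interval ({lo:.3f}, {hi:.3f})")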
Abstract:
In today's technological age, fraud has become more complicated and increasingly difficult to detect, especially when it is collusive in nature. Fraud surveys have shown that the median loss from collusive fraud is much greater than that from fraud perpetrated by a single person. Despite its prevalence and potentially devastating effects, collusion is commonly overlooked as an organizational risk, and internal auditors often fail to proactively consider collusion in their fraud assessment and detection efforts. In this paper we consider fraud scenarios involving collusion. We present six potentially collusive fraudulent behaviors and show their detection process in an ERP system. We have enhanced our fraud detection framework to aggregate different sources of logs in order to detect communication, and have further enhanced it to be system-agnostic, achieving portability and making it generally applicable to all ERP systems.
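A minimal sketch of the log-aggregation idea follows: an ERP audit log tells us who acted on each transaction, a communication log (e.g. from a mail server) tells us who talks to whom, and joining the two flags separation-of-duty conflicts between users who also communicate. A normalised schema of this kind is what makes such a framework system-agnostic; every record and field name below is hypothetical.

    erp_log = [("alice", "PO-117", "create"), ("bob", "PO-117", "approve"),
               ("carol", "PO-200", "create"), ("carol", "PO-200", "approve")]
    comm_log = [("alice", "bob"), ("bob", "dave")]

    def collusion_candidates(erp, comms):
        contacts = {frozenset(pair) for pair in comms}
        flagged = []
        for doc in {d for _, d, _ in erp}:
            actors = {u for u, d, _ in erp if d == doc}
            # two distinct users acting on one document who also communicate
            for pair in (frozenset((a, b)) for a in actors for b in actors if a < b):
                if pair in contacts:
                    flagged.append((doc, tuple(sorted(pair))))
        return flagged

    # carol approving her own PO is a single-user violation caught by a
    # separate rule; the collusive case is the communicating pair:
    print(collusion_candidates(erp_log, comm_log))  # [('PO-117', ('alice', 'bob'))]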
Abstract:
Becoming a teacher in technology-rich classrooms is a complex and challenging transition for career-change entrants. Those with generic or specialist Information and Communication Technology (ICT) expertise bring a mindset about purposeful uses of ICT that enriches student learning and school communities. The transition process from a non-education environment is both enhanced and constrained by shifting the technology context of generic or specialist ICT expertise developed through a former career as well as general life experience. In developing an understanding of the complexity of classrooms and creating a learner-centred way of working, perceptions about learners and learning evolve and shift. Shifts in thinking about how ICT expertise supports learners and enhances learning preceded shifts in perceptions about being a teacher, working with colleagues, and functioning in schools, with varying degrees of intensity and impact on evolving professional identities. Current teacher education and school induction programs are seen to be falling short of meeting the needs of career-change entrants and, as a flow-on, the students they nurture. Research (see, for example, Tigchelaar, Brouwer, & Korthagen, 2008; Williams & Forgasz, 2009) highlights the value of the generic and specialist expertise career-change teachers bring to the profession and draws attention to the challenges such expertise begets (Anthony & Ord, 2008; Priyadharshini & Robinson-Pant, 2003). The study described in this thesis therefore investigated the perceptions of career-change entrants who have generic (Mishra & Koehler, 2006) or specialist expertise, that is, ICT qualifications and work experience in the use of ICT. The career-change entrants' perceptions were sought as they shifted technology contexts and transitioned into teaching in technology-rich classrooms. The research involved an interpretive analysis of qualitative and quantitative data. The study used the explanatory case study methodology (Yin, 1994), enriched through grounded theory processes (Strauss & Corbin, 1998), to develop a theory about professional identity transition from the perceptions of the participants in the study. The study provided insights into the expertise and experiences of career-change entrants, particularly in relation to how professional identities that include generic and specialist ICT knowledge and expertise were reconfigured during the transition into the teaching profession. This thesis presents the Professional Identity Transition Theory, which encapsulates perceptions about teaching in technology-rich classrooms amongst a selection of the increasing number of career-change entrants. The theory, grounded in the data (Strauss & Corbin, 1998), proposes that career-change entrants experience transition phases of varying intensity that impact on professional identity, retention and development as a teacher. These phases are linked to a shift in perceptions rather than to time as a teacher. Generic and specialist expertise in the use of ICT is both an asset and a weight from the past that makes the transition process more challenging for career-change entrants. The study showed that career-change entrants used their experiences and perceptions to develop a way of working in a school community. Their way of working initially had an adaptive orientation focussed on immediate needs as their teaching practice developed. Following a shift in thinking, more generative ways of working focussed on the future emerged, enabling continual enhancement and development of practice. Sustaining such learning is a personal, school and systemic challenge for the teaching profession.
Abstract:
Automobiles have deeply impacted the way in which we travel, but they have also contributed to many deaths and injuries due to crashes. Researchers have pointed out a number of reasons for these crashes, and inexperience has been identified as a contributing factor. A driver's driving abilities also play a vital role in judging the road environment and reacting in time to avoid any possible collision; a driver's perceptual and motor skills therefore remain key factors impacting on road safety. Our failure to understand what is really important for learners, in terms of competent driving, is one of the many challenges for building better training programs. Driver training is one of the interventions aimed at decreasing the number of crashes that involve young drivers, and there is currently a need to develop a comprehensive driver evaluation system that benefits from advances in Driver Assistance Systems. A multidisciplinary approach is necessary to explain how driving abilities evolve with on-road driving experience. To our knowledge, driver assistance systems have never been comprehensively used in a driver training context to assess the safety aspect of driving. The aim and novelty of this thesis is to develop and evaluate an Intelligent Driver Training System (IDTS) as an automated assessment tool that will help drivers and their trainers to comprehensively view complex driving manoeuvres and potentially provide effective feedback by post-processing the data recorded during driving. The system is designed to help driver trainers accurately evaluate driver performance and has the potential to provide valuable feedback to drivers. Since driving depends on fuzzy inputs from the driver (i.e. approximate calculation of distance from other vehicles, approximate assumptions about other vehicles' speeds), the evaluation system must be based on criteria and rules that handle the uncertain and fuzzy characteristics of driving tasks; the proposed IDTS therefore utilizes fuzzy set theory for the assessment of driver performance. The research program focuses on integrating multi-sensory information acquired from the vehicle, driver and environment to assess driving competencies. After information acquisition, the research focuses on automated segmentation of the selected manoeuvres from the driving scenario. This leads to the creation of a model that determines a "competency" criterion based on the driving performance protocol used by driver trainers (i.e. expert knowledge) to assess drivers. This is achieved by comprehensively evaluating and assessing the data stream acquired from multiple in-vehicle sensors using fuzzy rules, and classifying the driving manoeuvres (i.e. overtake, lane change, T-crossing and turn) as low or high competency. The fuzzy rules use parameters such as following distance, gaze depth and scan area, distance with respect to lanes, and excessive acceleration or braking during the manoeuvres to assess competency. The rules that identify driving competency were initially designed with the help of experts' knowledge (i.e. driver trainers). In order to fine-tune these rules and the parameters that define them, a driving experiment was conducted to identify the empirical differences between novice and experienced drivers.
The results from the driving experiment indicated that significant differences existed between novice and experienced drivers in terms of their gaze pattern and duration, speed, stop time at the T-crossing, lane keeping, and the time spent in lanes while performing the selected manoeuvres. These differences were used to refine the fuzzy membership functions and rules that govern the assessment of the driving tasks. Next, this research focused on providing an integrated visual assessment interface to both driver trainers and their trainees. By providing a rich set of interactive graphical interfaces displaying information about the driving tasks, the IDTS visualisation module has the potential to give empirical feedback to its users. Lastly, the IDTS system's assessment was validated by comparing its objective assessments for the driving experiment with the subjective assessments of the driver trainers for particular manoeuvres. Results show that IDTS not only matched the subjective assessments made by driver trainers during the driving experiment but also identified some additional manoeuvres performed at low competency that the driver trainers did not identify, owing to their increased mental workload when assessing the multiple variables that constitute driving. The validation of IDTS emphasized the need for an automated assessment tool that can segment the manoeuvres from the driving scenario, further investigate the variables within each manoeuvre to determine its competency, and provide integrated visualisation of the manoeuvre to its users (i.e. trainers and trainees). Through analysis and validation it was shown that IDTS is a useful assistance tool for driver trainers to empirically assess, and potentially provide feedback on, the manoeuvres undertaken by drivers.
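As an illustration of how fuzzy rules can grade a manoeuvre, the sketch below evaluates a single "safe following distance and smooth control" rule with triangular membership functions; the breakpoints and the rule itself are illustrative placeholders, not the calibrated values refined through the thesis's driving experiment.

    def tri(x, a, b, c):
        """Triangular membership: rises from 0 at a to 1 at b, back to 0 at c."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def competency(follow_dist_m, accel_ms2):
        close = tri(follow_dist_m, 0, 5, 15)    # dangerously close
        safe = tri(follow_dist_m, 10, 25, 40)   # comfortable gap
        harsh = tri(abs(accel_ms2), 2, 4, 8)    # harsh braking/acceleration
        high = min(safe, 1 - harsh)             # fuzzy AND of the two criteria
        low = max(close, harsh)                 # fuzzy OR of the risk factors
        return "high" if high >= low else "low"

    print(competency(follow_dist_m=24, accel_ms2=0.8))  # -> high
    print(competency(follow_dist_m=7, accel_ms2=3.5))   # -> low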
Abstract:
We present the findings of a study into the implementation of explicitly criterion-referenced assessment in undergraduate courses in mathematics. We discuss students' concepts of criterion referencing and the various interpretations that this concept has among mathematics educators. Our primary goal was to move towards a classification of criterion-referencing models in quantitative courses. A secondary goal was to investigate whether explicitly presenting assessment criteria to students was useful to them and guided them in responding to assessment tasks. The data and feedback from students indicate that while students found the criteria easy to understand and useful in informing them of how they would be graded, it did not alter the way they actually approached the assessment activity.
Abstract:
About 1.6 million students currently study outside their home country. Despite this, and the fact that Australia, the United States, the United Kingdom and many of the other host countries of international students are themselves extremely culturally diverse communities, business education remains essentially mono-cultural in form and Anglo-American in content. While these international students may want to understand the "Western" way of doing things, they may not be familiar or comfortable with the processes used to facilitate learning. This paper explores a project undertaken to create a tool that provides essential pre-orientation information and advice to students before they leave home. Where cultural adjustment is required, the period before departure is a very effective time to introduce key information about lifestyle, culture and approaches to teaching and learning that can assist students with the complex and difficult adjustment to studying abroad, so that they can make a smoother transition to their new place of learning. Welcome to Studying Business at QUT is a data DVD with 19 short videos capturing a student perspective on life and study. Forty percent of the content relates to living and studying, with sections on accommodation, lifestyle, food and transport; the other 60% takes an in-depth look at studying business, featuring students and academics talking about issues such as assessment, academic writing and working in groups. This paper outlines the process of developing the DVD and the range of issues addressed.
Abstract:
We present a hierarchical model for assessing an object-oriented program's security. Security is quantified using structural properties of the program code to identify the ways in which 'classified' data values may be transferred between objects. The model begins with a set of low-level security metrics based on traditional design characteristics of object-oriented classes, such as data encapsulation, cohesion and coupling. These metrics are then used to characterise higher-level properties concerning the overall readability and writability of classified data throughout the program. In turn, these metrics are then mapped to well-known security design principles such as 'assigning the least privilege' and 'reducing the size of the attack surface'. Finally, the entire program's security is summarised as a single security index value. These metrics allow different versions of the same program, or different programs intended to perform the same task, to be compared for their relative security at a number of different abstraction levels. The model is validated via an experiment involving five open source Java programs, using a static analysis tool we have developed to automatically extract the security metrics from compiled Java bytecode.
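One of the low-level metrics can be illustrated with a simplified analogue: the proportion of 'classified' attributes a class exposes beyond private scope, a readability-of-classified-data style measure that is then averaged into a program-level index. The actual model computes such metrics from compiled Java bytecode via static analysis; the class data, names and aggregation below are illustrative only.

    classes = {
        "Account": {"classified": ["pin", "balance"], "non_private": ["balance"]},
        "Logger":  {"classified": [], "non_private": ["path"]},
    }

    def classified_exposure(cls):
        c = set(cls["classified"])
        if not c:
            return 0.0  # nothing sensitive to expose
        return len(c & set(cls["non_private"])) / len(c)

    # Aggregate upward into a single index (here, lower = more secure)
    index = sum(classified_exposure(c) for c in classes.values()) / len(classes)
    print(f"program security index: {index:.2f}")  # -> 0.25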
Abstract:
The tear film plays an important role in preserving the health of the ocular surface and maintaining the optimal refractive power of the cornea. Moreover, dry eye syndrome, one of the most commonly reported eye health problems, is caused by abnormalities in the properties of the tear film. Current clinical tools to assess tear film properties have shown certain limitations. The traditional invasive methods for the assessment of tear film quality, which are used by most clinicians, have been criticized for their lack of reliability and/or repeatability, and the range of non-invasive methods that has been investigated also presents limitations. Hence no "gold standard" test is currently available to assess tear film integrity, and improving techniques for the assessment of tear film quality is of clinical significance and the main motivation for the work described in this thesis. In this study, tear film surface quality (TFSQ) changes were investigated by means of high-speed videokeratoscopy (HSV). In this technique, a set of concentric rings formed in an illuminated cone or bowl is projected onto the anterior cornea and its reflection from the ocular surface is imaged on a charge-coupled device (CCD). The light is reflected from the outermost layer of the cornea, the tear film; hence, when the tear film is smooth the reflected image presents a well-structured pattern, whereas when the tear film surface presents irregularities the pattern also becomes irregular due to scatter and deviation of the reflected light. The videokeratoscope provides an estimate of the corneal topography associated with each Placido disk image. Topographical estimates, which have been used in the past to quantify tear film changes, may not always be suitable for the evaluation of all the dynamic phases of the tear film; the Placido disk image itself, which contains the reflected pattern, may be more appropriate for assessing tear film dynamics. A set of novel routines was purposely developed to quantify the changes of the reflected pattern and to extract a time-series estimate of the TFSQ from the video recording. The routines extract from each frame of the video recording a maximized area of analysis, in which a metric of the TFSQ is calculated. Initially, two metrics based on Gabor-filter and Gaussian gradient-based techniques were used to quantify the consistency of the pattern's local orientation as a measure of TFSQ. These metrics helped to demonstrate the applicability of HSV to assessing the tear film, and the influence of contact lens wear on TFSQ. The results suggest that the dynamic-area analysis method of HSV was able to distinguish and quantify the subtle but systematic degradation of tear film surface quality in the inter-blink interval during contact lens wear, and was also able to clearly show a difference between bare-eye and contact lens wearing conditions. Thus, the HSV method appears to be a useful technique for quantitatively investigating the effects of contact lens wear on TFSQ. Subsequently, a larger clinical study was conducted to compare HSV with two other non-invasive techniques, lateral shearing interferometry (LSI) and dynamic wavefront sensing (DWS). Of these non-invasive techniques, the HSV appeared to be the most precise method for measuring TFSQ, by virtue of its lower coefficient of variation,
while the LSI appeared to be the most sensitive method for analyzing the tear build-up time (TBUT). The capability of each of the non-invasive methods to discriminate dry eye from normal subjects was also investigated, and receiver operating characteristic (ROC) curves were calculated to assess the ability of each method to predict dry eye syndrome. The LSI technique gave the best results under both natural and suppressed blinking conditions, closely followed by HSV; the DWS did not perform as well as LSI or HSV. The main limitation of the HSV technique identified during this clinical study was a lack of sensitivity in quantifying the build-up/formation phase of the tear film cycle. For that reason an additional metric based on image transformation and block processing was proposed. In this metric, the area of analysis is transformed from Cartesian to polar coordinates, converting the concentric-circle pattern into a quasi-straight-line image from which a block statistics value is extracted. This metric showed better sensitivity under low pattern disturbance and improved the performance of the ROC curves. Additionally, a theoretical study based on ray-tracing techniques and topographical models of the tear film was undertaken to fully comprehend the HSV measurement and the instrument's potential limitations. Of special interest was the assessment of the instrument's sensitivity to subtle topographic changes. The theoretical simulations helped to provide some understanding of tear film dynamics; for instance, the model extracted for the build-up phase provided some insight into the dynamics of this initial phase. Finally, some aspects of the mathematical modeling of TFSQ time series are reported in this thesis. Over the years, different functions have been used to model such time series and to extract key clinical parameters (i.e., timing); unfortunately, those techniques do not simultaneously consider the underlying physiological mechanism and the parameter-extraction methods. A set of guidelines is proposed to meet both criteria, with special attention given to a commonly used fit, the polynomial function, and to selecting the appropriate model order to ensure the true derivative of the signal is accurately represented. The work described in this thesis has shown the potential of using high-speed videokeratoscopy to assess tear film surface quality. A set of novel image and signal processing techniques has been proposed to quantify different aspects of tear film assessment, analysis and modeling. The dynamic-area HSV has shown good performance in a broad range of conditions (i.e., contact lens wear, normal and dry eye subjects). As a result, this technique could become a useful clinical tool for assessing tear film surface quality.
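In the spirit of the Gaussian gradient-based TFSQ metric, the sketch below scores how consistently oriented the reflected Placido pattern is within an area of analysis: a smooth tear film yields locally coherent gradient orientations, while a degrading film scatters them. It is a simplified stand-in for the thesis's routines, not their actual implementation, and assumes only NumPy.

    import numpy as np

    def tfsq_coherence(patch):
        gy, gx = np.gradient(patch.astype(float))
        theta = np.arctan2(gy, gx)
        # Doubled-angle averaging so opposite gradient directions reinforce
        # (orientation, not direction): 1.0 = perfectly regular, 0.0 = random.
        return float(np.abs(np.exp(2j * theta).mean()))

    stripes = np.tile(np.sin(np.linspace(0, 8 * np.pi, 64)), (64, 1))
    print(f"regular pattern:   {tfsq_coherence(stripes):.2f}")  # near 1
    rng = np.random.default_rng(0)
    print(f"disturbed pattern: {tfsq_coherence(rng.random((64, 64))):.2f}")  # near 0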