905 results for "Off-line training"
Abstract:
The draft Year 1 Literacy and Numeracy Checkpoints Assessments were in open and supported trial during Semester 2, 2010. The purpose of these trials was to evaluate the Year 1 Literacy and Numeracy Checkpoints Assessments (hereafter the Year 1 Checkpoints), which were designed in 2009 as a way to incorporate the use of the Year 1 Literacy and Numeracy Indicators as formative assessment in Year 1 in Queensland schools. In these trials there were no mandated reporting requirements; the processes of assessment were related to future teaching decisions. As such, the trials tested both the materials and the processes of using those materials to assess students, plan and teach in Year 1 classrooms. In their current form the Year 1 Checkpoints provide assessment resources for teachers to use in February, June and October. They aim to support teachers in monitoring children's progress and making judgments about their achievement of the targeted P‐3 Literacy and Numeracy Indicators by the end of Year 1 (Queensland Studies Authority, 2010, p. 1). The Year 1 Checkpoints include support materials for teachers and administrators, an introductory statement on assessment, work samples, and a Data Analysis Assessment Record (DAAR) to record student performance. The Supported Trial participants were also supported with face‐to‐face and on‐line training sessions, involvement in a moderation process after the October assessments, opportunities to participate in discussion forums, and additional readings and materials. The assessment resources aim to use effective early years assessment practices in that the evidence is gathered from hands‐on teaching and learning experiences rather than more formal assessment methods. They are based in a model of assessment for learning, and aim to support teachers in the "on‐going process of determining future learning directions" (Queensland Studies Authority, 2010, p. 1) for all students.
Their aim is to focus teachers on interpreting and analysing evidence to make informed judgments about the achievement of all students, as a way to support subsequent planning for learning and teaching. The Evaluation of the Year 1 Literacy and Numeracy Checkpoints Assessments Supported Trial (hereafter the Evaluation) aimed to gather information about the appropriateness, effectiveness and utility of the Year 1 Checkpoints Assessments from early years teachers and leaders in up to one hundred Education Queensland schools that had volunteered to be part of the Supported Trial. These sample schools represent a variety of Education Queensland regions and include schools with: a high Indigenous student population; urban, rural and remote locations; single and multi‐age early phase classes; and a high proportion of students from low SES backgrounds. The purpose of the Evaluation was to evaluate the materials and report on the views of school‐based staff involved in the trial on the process, materials, and assessment practices utilised. The Evaluation reviewed the materials, and used surveys, interviews, and observations of processes and procedures to collect relevant data to help present an informed opinion on the Year 1 Checkpoints as assessment for the early years of schooling. Student work samples and teacher planning and assessment documents were also collected. The Evaluation considered the Year 1 Checkpoints solely as a resource for Year 1 teachers and relevant support staff.
Abstract:
It is well accepted that different types of distributed architectures require different degrees of coupling. For example, in client-server and three-tier architectures, application components are generally tightly coupled, both with one another and with the underlying middleware. Meanwhile, in off-line transaction processing, grid computing and mobile applications, the degree of coupling between application components and with the underlying middleware needs to be minimized. Terms such as ‘synchronous’, ‘asynchronous’, ‘blocking’, ‘non-blocking’, ‘directed’, and ‘non-directed’ are often used to refer to the degree of coupling required by an architecture or provided by a middleware. However, these terms are used with various connotations. Although various informal definitions have been provided, there is a lack of an overarching formal framework to unambiguously communicate architectural requirements with respect to (de-)coupling. This article addresses this gap by: (i) formally defining three dimensions of (de-)coupling; (ii) relating these dimensions to existing middleware; and (iii) proposing notational elements to represent various coupling integration patterns. This article also discusses a prototype that demonstrates the feasibility of its implementation.
Abstract:
The overall aim of this project was to contribute to existing knowledge regarding methods for measuring the characteristics of airborne nanoparticles and controlling occupational exposure to them, and to gather data on nanoparticle emission and transport in various workplaces. The scope of this study involved investigating the characteristics and behaviour of particles arising from the operation of six nanotechnology processes, subdivided into nine processes for measurement purposes. It did not include toxicological evaluation of the aerosols and, therefore, no direct conclusions were made regarding the health effects of exposure to these particles. Our research included real-time measurement of sub- and supermicrometre particle number and mass concentration, count median diameter, and alveolar deposited surface area using condensation particle counters, an optical particle counter, a DustTrak photometer, a scanning mobility particle sizer, and a nanoparticle surface area monitor, respectively. Off-line particle analysis included scanning and transmission electron microscopy, energy-dispersive x-ray spectrometry, and thermal optical analysis of elemental carbon. Sources of both fibrous and non-fibrous particles were included.
Abstract:
A new control method for battery storage to maintain an acceptable voltage profile in autonomous microgrids is proposed in this article. The proposed battery control ensures that the bus voltages in the microgrid are maintained during disturbances such as load changes, loss of micro-sources, or distributed generators hitting their power limits. Unlike conventional storage control based on local measurements, the proposed method is based on an advanced control technique in which the reference power is determined from the voltage drop profile at the battery bus. An artificial neural network based controller is used to determine the reference power needed for the battery to hold the microgrid voltage within regulation limits. The pattern of drop in the local bus voltage during a power imbalance is used to train the controller off-line. During normal operation, the battery floats with the local bus voltage without any power injection. The battery is charged or discharged during transients with a high-gain feedback loop. Depending on the rate of voltage fall, it is switched to power control mode to inject the reference power determined by the proposed controller. After a defined time period, the battery power injection is reduced to zero using slow reverse-droop characteristics, ensuring a slow rate of increase in power demand from the other distributed generators. The proposed control method is simulated for various operating conditions in a microgrid with both inertial and converter-interfaced sources. The proposed battery control provides quick load pick-up and smooth load sharing with the other micro-sources during a disturbance. Across various disturbances, a maximum voltage drop of over 8% with conventional energy storage is reduced to within 2.5% with the proposed control method.
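The off-line training step described above can be sketched numerically. This is a minimal illustrative sketch, not the article's implementation: the network size, the synthetic voltage-sag patterns, and the kW scale are all assumptions introduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative synthetic data (the article's real training patterns are not
# reproduced here): each input is a 20-sample window of the per-unit voltage
# *drop* at the battery bus during a disturbance; the target is a hypothetical
# battery reference power in kW that restored the voltage.
def make_sample():
    p_ref = rng.uniform(10.0, 100.0)
    depth = 0.001 * p_ref                        # deeper sag -> more power needed
    t = np.linspace(0.0, 1.0, 20)
    drop = depth * (1.0 - np.exp(-5.0 * t))      # sag trajectory
    return 10.0 * drop, p_ref                    # scale features to ~[0, 1]

samples = [make_sample() for _ in range(200)]
X = np.array([s[0] for s in samples])
y = np.array([s[1] for s in samples])
y_mu, y_sd = y.mean(), y.std()
yn = (y - y_mu) / y_sd                           # normalised target

# Single-hidden-layer network, trained off-line by batch gradient descent.
W1 = rng.normal(0.0, 0.5, (20, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1));  b2 = np.zeros(1)
lr = 0.1
for _ in range(3000):
    H = np.tanh(X @ W1 + b1)
    err = (H @ W2 + b2).ravel() - yn             # gradient of 1/2 * MSE
    dH = (err[:, None] @ W2.T) * (1.0 - H ** 2)
    W2 -= lr * H.T @ err[:, None] / len(X); b2 -= lr * err.mean(keepdims=True)
    W1 -= lr * X.T @ dH / len(X);           b1 -= lr * dH.mean(axis=0)

def reference_power(drop_window):
    """On-line use: map a measured voltage-drop pattern to P_ref (kW)."""
    h = np.tanh(np.asarray(drop_window) @ W1 + b1)
    return float((h @ W2 + b2)[0] * y_sd + y_mu)
```

After training, a deeper measured sag maps to a larger reference power, which is the behaviour the controller relies on when it switches the battery into power control mode.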
Abstract:
Driver distraction has recently been defined by Regan as "the diversion of attention away from activities critical for safe driving toward a competing activity, which may result in insufficient or no attention to activities critical for safe driving" (Regan, Hallett & Gordon, 2011, p. 1780). One source of distraction is in-vehicle devices, even though they might provide other benefits, e.g. navigation systems. Eco-driving systems have recently been growing rapidly in popularity. These systems send messages to drivers so that driving performance can be improved in terms of fuel efficiency. However, there remain unanswered questions about whether eco-driving systems endanger drivers by distracting them. In this research, the CARRS-Q advanced driving simulator was used to provide safety for participants while simulating real-world driving. The distraction effects of tasks involving three different in-vehicle systems were investigated: changing a CD, entering a five-digit number as part of a navigation task, and responding to an eco-driving task. Driving in these scenarios was compared with driving in the absence of these distractions, and while drivers engaged in critical manoeuvres. To account for practice effects, the same scenarios were duplicated on a second day. The three in-vehicle systems were not exact facsimiles of any particular existing system, but were designed to have characteristics similar to those of systems available. In general, the results show that drivers' mental workloads are significantly higher in the navigation and CD changing scenarios in comparison to the two other scenarios, which implies that these two tasks impose more visual/manual and cognitive demands. However, the mental workload in the eco-driving scenario was still marginally significant (p ~ .05) across manoeuvres.
Similarly, event detection tasks show that drivers miss significantly more events in the navigation and CD changing scenarios in comparison to both the baseline and eco-driving scenarios across manoeuvres. Analysis of the practice effect shows that the baseline and navigation scenarios imposed significantly less demand on the second day. However, the number of missed events across manoeuvres confirmed that drivers detected significantly more events on the second day in all scenarios. Distraction was also examined separately for five groups of manoeuvres (straight, lane changing, overtaking, braking for intersections and braking for roundabouts), in two locations for each condition. Repeated measures mixed ANOVA results show that reading an eco-driving message can potentially impair driving performance. When comparing the three in-vehicle distractions tested, attending to an eco-driving message is similar in effect to the CD changing task. The navigation task degraded driver performance much more than the other sources of distraction. In lane changing manoeuvres, drivers missed more events when they engaged in reading eco-driving messages at the first location; however, their event detection deteriorated less at the second lane changing location. In baseline manoeuvres (driving straight), participants' mean minimum speed degraded more in the CD changing scenario. Drivers' lateral position shifted more in both the CD changing and navigation tasks than in the eco-driving and baseline scenarios, so these tasks were more visually distracting. Participants were better at event detection in baseline manoeuvres than in other manoeuvres. When approaching an intersection, the navigation task caused participants to miss more events, whereas eco-driving messages seemed to leave drivers less distracted.
The eco-driving message scenario was significantly less distracting than the navigation system scenario (fewer missed responses) when participants commenced braking for roundabouts. To sum up, despite the finding that the two other in-vehicle tasks are more distracting than the eco-driving task, the results indicate that even reading a simple message while driving could lead to missing an important event, especially when executing critical manoeuvres. This suggests that in-vehicle eco-driving systems have the potential to contribute to increased crash risk through distraction. However, there is some evidence of a practice effect, which suggests that future research should focus on performance with habitual rather than novel tasks. It is recommended that eco-driving messages be delivered to drivers off-line when possible.
Abstract:
The use of piezoelectric transducers for energy conversion is rapidly expanding across several applications. Industrial applications in which a high power ultrasound transducer can be used include surface cleaning, water treatment, plastic welding and food sterilization. A high power ultrasound transducer also plays an important role in biomedical applications, both diagnostic and therapeutic. An ultrasound transducer is usually applied to convert electrical energy to mechanical energy and vice versa. In some high power ultrasound systems, ultrasound transducers are applied as a transmitter, as a receiver, or both. As a transmitter, a transducer converts electrical energy to mechanical energy, while a receiver converts mechanical energy to electrical energy, acting as a sensor for the control system. Once a piezoelectric transducer is excited by an electrical signal, the piezoelectric material starts to vibrate and generates ultrasound waves. A portion of the ultrasound waves that passes through the medium is sensed by the receiver and converted to electrical energy. To drive an ultrasound transducer, the excitation signal should be properly designed, otherwise an undesired (low quality) signal can deteriorate the performance of the transducer (energy conversion) and increase power consumption in the system. For instance, some portion of the generated power may be delivered at unwanted frequencies, which is not acceptable for some applications, especially biomedical applications. To achieve better transducer performance, along with the quality of the excitation signal, the characteristics of the high power ultrasound transducer should be taken into consideration as well. In this regard, several simulation and experimental tests are carried out in this research to model high power ultrasound transducers and systems.
During these experiments, high power ultrasound transducers are excited by several excitation signals with different amplitudes and frequencies, using a network analyser, a signal generator, a high power amplifier and a multilevel converter. Also, to analyse the behaviour of the ultrasound system, the voltage ratio of the system is measured in different tests: the voltage across the transmitter is measured as the input voltage and divided by the output voltage, which is measured across the receiver. The results of the transducer characteristics and the ultrasound system behaviour are discussed in chapters 4 and 5 of this thesis. Each piezoelectric transducer has several resonance frequencies at which its impedance has a lower magnitude compared to non-resonance frequencies. At just one of these resonance frequencies the magnitude of the impedance is minimal; this frequency is known as the main resonance frequency of the transducer. To attain higher efficiency and deliver more power to the ultrasound system, the transducer is usually excited at the main resonance frequency. It is therefore important to identify this frequency and the other resonance frequencies, and a frequency detection method is proposed in this research, discussed in chapter 2. An extended electrical model of an ultrasound transducer with multiple resonance frequencies consists of several RLC legs in parallel with a capacitor. Each RLC leg represents one of the resonance frequencies of the transducer. At a resonance frequency the inductive and capacitive reactances cancel each other out, and the resistor of that leg represents the power conversion of the system at that frequency. This concept is shown in the simulation and test results presented in chapter 4. To excite a high power ultrasound transducer, a high power signal is required.
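The multi-resonance electrical model described above can be sketched numerically. The element values below are hypothetical illustrations, not measured transducer parameters; the sketch shows how each series-RLC leg produces an impedance minimum at its resonance f_k = 1/(2π√(L_k·C_k)), where the inductive and capacitive reactances cancel and only R_k remains.

```python
import numpy as np

# Hypothetical element values for a two-resonance transducer model:
# a shunt capacitance C0 in parallel with two series-RLC legs.
C0 = 4e-9                                  # F, static capacitance
legs = [                                   # (R ohm, L henry, C farad)
    (50.0, 25e-3, 1.0e-9),                 # resonance near 31.8 kHz
    (200.0, 10e-3, 0.4e-9),                # resonance near 79.6 kHz
]

def impedance(f):
    """Complex impedance of the parallel network at frequency f (Hz)."""
    w = 2.0 * np.pi * f
    y = 1j * w * C0                        # admittance of the shunt capacitor
    for R, L, C in legs:
        y += 1.0 / (R + 1j * w * L + 1.0 / (1j * w * C))
    return 1.0 / y

# At each leg's series resonance the reactances cancel, leaving ~R_k.
for R, L, C in legs:
    f_res = 1.0 / (2.0 * np.pi * np.sqrt(L * C))
    print(f"leg resonance {f_res / 1e3:.1f} kHz, |Z| = {abs(impedance(f_res)):.1f} ohm")
```

Sweeping `impedance` over frequency reproduces the dips that a frequency detection method would search for; the deepest dip corresponds to the main resonance frequency.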
Multilevel converters are usually applied to generate a high power signal, but the drawback is the signal's low quality in comparison with a sinusoidal signal. In some applications, such as ultrasound, it is extremely important to generate a high quality signal. Several control and modulation techniques have been introduced in the literature to control the output voltage of multilevel converters. One of these is the harmonic elimination technique, in which switching angles are chosen in such a way as to reduce the harmonic content of the output. Increasing the number of switching angles results in more harmonic reduction, but more switching angles require more output voltage levels, which increases the number of components and the cost of the converter. To improve the quality of the output voltage signal without additional components, a new harmonic elimination technique is proposed in this research. In this new technique, more variables (DC voltage levels as well as switching angles) are chosen to eliminate more low order harmonics compared to conventional harmonic elimination techniques. In the conventional harmonic elimination method, the DC voltage levels are equal and only the switching angles are calculated to eliminate harmonics; the number of eliminated harmonics is therefore limited by the number of switching cycles. In the proposed modulation technique, the switching angles and the DC voltage levels are calculated off-line to eliminate more harmonics. Consequently, the DC voltage levels are not equal and must be regulated; to achieve this, a DC/DC converter is applied to adjust the DC link voltages across several capacitors. The effect of the new harmonic elimination technique on the output quality of several single phase multilevel converters is explained in chapters 3 and 6 of this thesis.
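The arithmetic behind harmonic elimination can be sketched briefly. For a quarter-wave-symmetric staircase waveform, odd harmonic n has amplitude b_n = (4/(nπ)) Σ_k V_k cos(n·θ_k); conventional elimination fixes the V_k equal and solves only for the θ_k, while the proposed technique frees the V_k as well. The angles and levels below are hypothetical placeholders, not solved values; the sketch cross-checks the analytic formula against a synthesised waveform.

```python
import numpy as np

# Hypothetical switching angles (rad) and unequal DC levels (p.u.) -- in the
# proposed scheme both would be pre-computed off-line by a numerical solver.
thetas = np.array([0.2, 0.6, 1.1])
levels = np.array([1.0, 0.9, 1.2])

def b(n, thetas, levels):
    """Amplitude of odd harmonic n of a quarter-wave-symmetric staircase."""
    return 4.0 / (n * np.pi) * np.sum(levels * np.cos(n * thetas))

# Synthesise the staircase over one period and check harmonic 5 numerically.
t = np.linspace(0.0, 2.0 * np.pi, 200001)
v = np.zeros_like(t)
for th, V in zip(thetas, levels):
    v += V * ((t > th) & (t < np.pi - th))                 # positive half-cycle step
    v -= V * ((t > np.pi + th) & (t < 2.0 * np.pi - th))   # mirrored negative step
dt = t[1] - t[0]
b5_numeric = np.sum(v * np.sin(5.0 * t)) * dt / np.pi      # Fourier sine coefficient
print(b(5, thetas, levels), b5_numeric)                    # the two should agree
```

An off-line solver would drive a set of these b_n expressions to zero (and b_1 to the desired fundamental) by adjusting `thetas`, and in the proposed method `levels` too, before the converter is operated.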
According to the electrical model of a high power ultrasound transducer, the device can be modelled as parallel combinations of RLC legs with a main capacitor. The impedance diagram of the transducer in the frequency domain shows that it has capacitive characteristics at almost all frequencies. Therefore, using a voltage source converter to drive a high power ultrasound transducer can create significant leakage current through the transducer, due to the significant voltage stress (dv/dt) across it. To remedy this problem, LC filters are applied in some applications. For applications such as ultrasound, however, an LC filter can deteriorate the performance of the transducer by changing its characteristics and displacing its resonance frequency. In such a case a current source converter is a suitable choice to overcome this problem. In this regard, a current source converter is implemented and applied to excite the high power ultrasound transducer. To control the output current and voltage, hysteresis control and unipolar modulation are used, respectively. The results of this test are explained in chapter 7.
Abstract:
In some parts of Australia, people wanting to learn to ride a motorcycle are required to complete an off-road training course before they are allowed to practice on the road. In the state of Queensland, they are only required to pass a short multiple-choice road rules knowledge test. This paper describes an analysis of police-reported crashes involving learner riders in Queensland that was undertaken as part of research investigating whether pre-learner training is needed and, if so, the issues that should be addressed in training. The crashes of learner riders and other riders were compared to identify whether there are particular situations or locations in which learner motorcyclists are over-involved in crashes, which could then be targeted in the pre-learner package. The analyses were undertaken separately for riders aged under 25 (330 crashes) versus those aged 25 and over (237 crashes) to provide some insight into whether age or riding inexperience is the more important factor, and thus to indicate whether there is merit in having different licensing or training approaches for younger and older learner riders. Given that the average age of learner riders was 33 years, under 25 was chosen to provide a sufficiently large sample of younger riders. Learner riders appeared to be involved in more severe crashes and to be more often at fault than fully-licensed riders, but this may reflect problems in reporting rather than real differences. Compared to open licence holders, both younger and older learner riders had relatively more crashes in low speed zones and relatively fewer in high speed zones. Riders aged under 25 had elevated percentages of night-time crashes and fewer single unit (potentially involving rider error only) crashes regardless of the type of licence held.
The contributing factors that were more prevalent in crashes of learner riders than holders of open licences were: inexperience (37.2% versus 0.5%), inattention (21.5% versus 15.6%), alcohol or drugs (12.0% versus 5.1%) and drink riding (9.9% versus 3.1%). The pattern of contributing factors was generally similar for younger and older learner riders, although younger learners were (not surprisingly) more likely to have inexperience coded as a contributing factor (49.7% versus 19.8%). Some of the differences in crashes between learner riders and fully-licensed riders appear to reflect relatively more riding in urban areas by learners, rather than increased risks relating to inexperience. The analysis of contributing factors in learner rider crashes suggests that hazard perception and risk management (in terms of speed and alcohol and drugs) should be included in a pre-learner program. Currently, most learner riders in Queensland complete pre-licence training and become licensed within one month of obtaining their learner permit. If the introduction of pre-learner training required that the learner permit was held for a minimum duration, then the immediate effect might be more learners riding (and crashing). Thus, it is important to consider how training and licensing initiatives work together in order to improve the safety of new riders (and how this can be evaluated).
Abstract:
Safety concerns in the operation of autonomous aerial systems require that safe-landing protocols be followed in situations where a mission must be aborted due to mechanical or other failure. On-board cameras provide information that can be used to determine potential landing sites, which are continually updated and ranked to prevent injury and minimize damage. Pulse Coupled Neural Networks (PCNNs) have been used to detect features in images that assist in the classification of vegetation and can be used to minimize damage to the aerial vehicle. However, a significant drawback of PCNNs is that they are computationally expensive and have been better suited to off-line applications on conventional computing architectures. As heterogeneous computing architectures become more common, an OpenCL implementation of a PCNN feature generator is presented and its performance is compared across OpenCL kernels designed for CPU, GPU and FPGA platforms. This comparison examines the compute times required for network convergence under a variety of images obtained during unmanned aerial vehicle trials to determine the plausibility of real-time feature detection.
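The iterative character of a PCNN (and why it is expensive: every pixel is a neuron updated every step) can be illustrated with a minimal simplified variant. The parameter values and 4-neighbour linking below are common textbook choices, not the paper's OpenCL kernels.

```python
import numpy as np

def neighbour_sum(a):
    # 4-neighbour linking field (wrap-around edges kept for brevity)
    return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
            np.roll(a, 1, 1) + np.roll(a, -1, 1))

def pcnn(img, steps=10, beta=0.2, alpha_t=0.3, v_t=20.0):
    """Simplified Pulse Coupled Neural Network: returns the per-iteration
    firing counts (the 'time signature') usable as a global image feature.
    Parameter values are illustrative only."""
    img = img.astype(float)
    y = np.zeros_like(img)                 # pulses from the previous step
    theta = np.full_like(img, v_t)         # dynamic firing thresholds
    signature = []
    for _ in range(steps):
        l = neighbour_sum(y)               # linking input from neighbours
        u = img * (1.0 + beta * l)         # internal activity
        y = (u > theta).astype(float)      # neurons pulse when U exceeds theta
        theta = theta * np.exp(-alpha_t) + v_t * y   # decay, jump on firing
        signature.append(int(y.sum()))
    return signature
```

Each step touches every pixel and its neighbourhood, which is exactly the data-parallel workload that maps naturally onto OpenCL kernels for GPU and FPGA targets.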
Abstract:
The detailed characterization of protein N-glycosylation is very demanding given the many different glycoforms and structural isomers that can exist on glycoproteins. Here we report a fast and sensitive method for the extensive structure elucidation of reducing-end labeled N-glycan mixtures using a combination of capillary normal-phase HPLC coupled off-line to matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF-MS) and TOF/TOF-MS/MS. Using this method, isobaric N-glycans released from honey bee phospholipase A2 and Arabidopsis thaliana glycoproteins were separated by normal-phase chromatography and subsequently identified by key fragment ions in the MALDI-TOF/TOF tandem mass spectra. In addition, linkage and branching information was provided by abundant cross-ring and "elimination" fragment ions in the MALDI-CID spectra that gave extensive structural information. Furthermore, the fragmentation characteristics of N-glycans reductively aminated with 2-aminobenzoic acid and 2-aminobenzamide were compared. The identification of N-glycans containing 3-linked core fucose was facilitated by distinctive ions present only in the MALDI-CID spectra of 2-aminobenzoic acid-labeled oligosaccharides. To our knowledge, this is the first MS/MS-based technique that allows confident identification of N-glycans containing 3-linked core fucose, which is a major allergenic determinant on insect and plant glycoproteins.
Abstract:
A decision-making framework for image-guided radiotherapy (IGRT) is being developed using a Bayesian Network (BN) to graphically describe, and probabilistically quantify, the many interacting factors that are involved in this complex clinical process. Outputs of the BN will provide decision-support for radiation therapists to assist them to make correct inferences relating to the likelihood of treatment delivery accuracy for a given image-guided set-up correction. The framework is being developed as a dynamic object-oriented BN, allowing for complex modelling with specific sub-regions, as well as representation of the sequential decision-making and belief updating associated with IGRT. A prototype graphic structure for the BN was developed by analysing IGRT practices at a local radiotherapy department and incorporating results obtained from a literature review. Clinical stakeholders reviewed the BN to validate its structure. The BN consists of a sub-network for evaluating the accuracy of IGRT practices and technology. The directed acyclic graph (DAG) contains nodes and directional arcs representing the causal relationship between the many interacting factors such as tumour site and its associated critical organs, technology and technique, and inter-user variability. The BN was extended to support on-line and off-line decision-making with respect to treatment plan compliance. Following conceptualisation of the framework, the BN will be quantified. It is anticipated that the finalised decision-making framework will provide a foundation to develop better decision-support strategies and automated correction algorithms for IGRT.
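The belief updating such a network performs can be illustrated on a tiny two-node fragment (SetupOK → ImageMatch) using Bayes' rule. All probabilities below are hypothetical illustrations, not clinical data or values from the framework.

```python
# Hypothetical conditional probabilities for a two-node BN fragment:
# SetupOK -> ImageMatch. Illustrative numbers only.
p_setup_ok = 0.9                # prior belief that the patient set-up is correct
p_match_given_ok = 0.95         # P(image matches plan | set-up correct)
p_match_given_not_ok = 0.20     # P(image matches plan | set-up incorrect)

def posterior_setup_ok(image_match: bool) -> float:
    """P(SetupOK | ImageMatch = image_match) by Bayes' rule."""
    like_ok = p_match_given_ok if image_match else 1.0 - p_match_given_ok
    like_bad = p_match_given_not_ok if image_match else 1.0 - p_match_given_not_ok
    num = like_ok * p_setup_ok
    return num / (num + like_bad * (1.0 - p_setup_ok))
```

In the full framework the same updating would propagate through many interacting nodes (tumour site, technique, inter-user variability) rather than a single parent-child pair.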
Abstract:
Multiple-time signatures are digital signature schemes in which the signer is able to sign a predetermined number of messages. They are interesting cryptographic primitives because they make it possible to solve many important cryptographic problems while offering a substantial efficiency advantage over ordinary digital signature schemes such as RSA. Multiple-time signature schemes have found numerous applications, including ordinary, on-line/off-line, and forward-secure signatures, as well as multicast/stream authentication. We propose a multiple-time signature scheme with very efficient signing and verifying. Our construction is based on a combination of one-way functions and cover-free families, and it is secure against the adaptive chosen-message attack.
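The one-way-function principle behind such constructions can be illustrated with Lamport's classical one-time signature, the ancestor of these schemes. This sketch is not the authors' cover-free-family scheme; it shows only how hashing preimages yields a signature whose security rests on the one-wayness of H.

```python
import hashlib
import secrets

def H(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def keygen(bits=256):
    """One-time key: two random preimages per message-digest bit; the
    public key is their hashes."""
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(bits)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def _bits(digest: bytes, n: int):
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(n)]

def sign(sk, msg: bytes):
    # Reveal one preimage per bit of H(msg); revealing both halves of a
    # pair is why the key must be used only a limited number of times.
    return [sk[i][bit] for i, bit in enumerate(_bits(H(msg), len(sk)))]

def verify(pk, msg: bytes, sig) -> bool:
    return all(H(sig[i]) == pk[i][bit]
               for i, bit in enumerate(_bits(H(msg), len(pk))))
```

Cover-free families generalise this idea: they let several signatures reveal overlapping subsets of preimages without any single message's subset being covered by the others.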
Abstract:
A secure protocol for electronic, sealed-bid, single item auctions is presented. The protocol caters to both first and second price (Vickrey) auctions and provides full price flexibility. Both computational and communication costs are linear in the number of bidders, and the protocol utilizes only standard cryptographic primitives. The protocol strictly divides knowledge of the bidders' identities and their actual bids between, respectively, a registration authority and an auctioneer, who are assumed not to collude but may be separately corrupt. This assures strong bidder anonymity, though only weak bid privacy. The protocol is structured in two phases, each involving only off-line communication. Registration, which requires the use of the public key infrastructure, is simultaneous with hash-sealed bid commitment and generates a receipt to the bidder containing a pseudonym. This phase is followed by encrypted bid submission. Both phases involve the registration authority acting as a communication conduit, but the actual message size is quite small. It is argued that this structure guarantees non-repudiation by both the winner and the auctioneer. Second price correctness is enforced either by observing the absence of registration of the claimed second-price bid or, where the bid is registered but lower than the actual second price, through cooperation by the second price bidder, presumably motivated by self-interest. The use of the registration authority in other contexts is also considered, with a view to developing an architecture for efficient secure multiparty transactions.
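The hash-sealed bid-commitment step can be sketched as follows. Function names, the bid encoding, and the nonce length are assumptions for illustration; the protocol's registration, pseudonym and encrypted-submission machinery is omitted.

```python
import hashlib
import secrets

def seal_bid(bid_cents: int):
    """Commit phase: the bidder registers only H(bid || nonce). The random
    nonce makes the commitment hiding and keeps equal bids unlinkable."""
    nonce = secrets.token_bytes(16)
    commitment = hashlib.sha256(bid_cents.to_bytes(8, "big") + nonce).digest()
    return commitment, nonce

def open_bid(commitment: bytes, bid_cents: int, nonce: bytes) -> bool:
    """Reveal phase: anyone can recompute the hash and check the seal,
    so a bidder cannot change the bid after sealing (binding)."""
    return hashlib.sha256(bid_cents.to_bytes(8, "big") + nonce).digest() == commitment
```

Because the commitment is both binding and hiding, it supports the paper's non-repudiation argument: the sealed value fixes the bid at registration time without revealing it to the registration authority.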
Abstract:
Purpose: The purpose of this paper is to review, critique and develop a research agenda for the Elaboration Likelihood Model (ELM). The model was introduced by Petty and Cacioppo over three decades ago and has since been modified, revised and extended. Given modern communication contexts, it is appropriate to question the model's validity and relevance. Design/methodology/approach: The authors develop a conceptual approach, based on a comprehensive and extensive review and critique of the ELM and its development since its inception. Findings: This paper focuses on major issues concerning the ELM, including the model's assumptions and its descriptive nature; continuum questions, multi-channel processing and mediating variables; the need to replicate the ELM; and recommendations for its future development. Research limitations/implications: This paper offers a series of questions as research implications: whether the ELM could or should be replicated or extended; whether argument quality needs greater conceptualization; how movement along the continuum and between the central and peripheral routes to persuasion can be explained; and whether new methodologies and technologies can help better understand consumer thinking and behaviour. All of these relate to the current need to explore the relevance of the ELM in a more modern context. Practical implications: It is time to question the validity and relevance of the ELM. The diversity of on- and off-line media options and the variants of consumer choice raise significant issues. Originality/value: While the ELM continues to be widely cited and taught as one of the major cornerstones of persuasion, questions are raised concerning its relevance and validity in 21st century communication contexts.
Abstract:
Halevi and Krawczyk proposed a message randomization algorithm called RMX as a front-end tool to hash-then-sign digital signature schemes such as DSS and RSA, in order to free them from reliance on the collision resistance property of the hash functions. They showed that to forge an RMX-hash-then-sign signature scheme, one has to solve a cryptanalytical task related to finding second preimages for the hash function. In this article, we show how to use Dean's method of finding expandable messages (for finding a second preimage in the Merkle-Damgård hash function) to existentially forge a signature scheme based on a t-bit RMX-hash function that uses Davies-Meyer compression functions (e.g., MD4, MD5, the SHA family) in 2^(t/2) chosen messages plus 2^(t/2+1) off-line operations of the compression function, and a similar amount of memory. This forgery attack also works on signature schemes that use Davies-Meyer schemes and a variant of RMX published by NIST in its Draft Special Publication (SP) 800-106. We discuss some important applications of our attack.
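The Davies-Meyer structure that the attack exploits, H_i = E_{m_i}(H_{i-1}) XOR H_{i-1}, can be sketched as follows. The "block cipher" here is a toy stand-in (an assumption for illustration only); real Davies-Meyer instances use the dedicated cipher inside MD5/SHA, and Dean's attack relies on the cipher being invertible, which this toy is not.

```python
import hashlib

def toy_block_cipher(key: bytes, block: bytes) -> bytes:
    """Toy stand-in for E_k(x), illustration only (not invertible, unlike
    a real block cipher)."""
    return hashlib.sha256(b"E" + key + block).digest()[:16]

def davies_meyer(h: bytes, msg_block: bytes) -> bytes:
    """Davies-Meyer compression: H_i = E_{m_i}(H_{i-1}) XOR H_{i-1}.
    The message block acts as the cipher *key*; with an invertible E,
    fixed points H = E_m^{-1}(0) are easy to find, which is what makes
    expandable messages cheap to construct."""
    e = toy_block_cipher(msg_block, h)
    return bytes(a ^ b for a, b in zip(e, h))

def md_hash(iv: bytes, blocks) -> bytes:
    """Merkle-Damgard iteration over message blocks (length padding
    omitted for brevity)."""
    h = iv
    for m in blocks:
        h = davies_meyer(h, m)
    return h
```

An expandable message is a family of block sequences of different lengths that all iterate to the same chaining value h, letting the attacker splice a forged prefix into a signed message of matching length.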
Abstract:
This paper describes a software architecture for real-world robotic applications. We discuss issues of software reliability, testing and realistic off-line simulation, which allows the majority of the automation system to be tested in the laboratory before deployment in the field. A recent project, the automation of a very large mining machine, is used to illustrate the discussion.