774 results for real world learning
Abstract:
Curriculum developers and researchers have promoted context-based programmes to arrest waning student interest and participation in the enabling sciences at high school and university. Context-based programmes aim to connect scientific discourse with real-world contexts for students, elevating curricular relevance without diminishing conceptual understanding. This interpretive study explored the learning transactions in one 11th-grade context-based chemistry classroom where the context was the local creek. The dialectic of agency/structure was used as a lens to examine how the practices in classroom interactions afforded students the agency for learning. The results suggest, first, that fluid transitions were evident in the student–student interactions involving successful students; and second, that fluid transitions linking concepts to context were evident in the students’ successful reports. The study reveals that the structures of writing and collaborating in groups enabled students’ agential and fluent movement between the field of the real-world creek and the field of the formal chemistry classroom. Furthermore, characteristics of academically successful students in context-based chemistry are highlighted. Research, teaching, and future directions for context-based science teaching are discussed.
Abstract:
Robust hashing is an emerging field that can be used to hash certain data types in applications unsuitable for traditional cryptographic hashing methods. Traditional hashing functions have been used extensively for data/message integrity, data/message authentication, efficient file identification and password verification. These applications are possible because the hashing process is compressive, allowing for efficient comparisons in the hash domain, but non-invertible, meaning hashes can be used without revealing the original data. These techniques were developed with deterministic (non-changing) inputs such as files and passwords. For such data types a 1-bit or one-character change can be significant; as a result, the hashing process is sensitive to any change in the input. Unfortunately, there are certain applications where input data are not perfectly deterministic and minor changes cannot be avoided. Digital images and biometric features are two types of data where such changes exist but do not alter the meaning or appearance of the input. For such data types cryptographic hash functions cannot be usefully applied. In light of this, robust hashing has been developed as an alternative to cryptographic hashing and is designed to be robust to minor changes in the input. Although similar in name, robust hashing is fundamentally different from cryptographic hashing. Current robust hashing techniques are not based on cryptographic methods, but instead on pattern recognition techniques. Modern robust hashing algorithms consist of feature extraction followed by a randomization stage that introduces non-invertibility and compression, followed by quantization and binary encoding to produce a binary hash output. In order to preserve the robustness of the extracted features, most randomization methods are linear, and this is detrimental to the security aspects required of hash functions.
Furthermore, the quantization and encoding stages used to binarize real-valued features require the learning of appropriate quantization thresholds. How these thresholds are learnt has an important effect on hashing accuracy, and the mere presence of such thresholds is a source of information leakage that can reduce hashing security. This dissertation outlines a systematic investigation of the quantization and encoding stages of robust hash functions. While existing literature has focused on the importance of the quantization scheme, this research is the first to emphasise the importance of quantizer training for both hashing accuracy and hashing security. The quantizer training process is presented in a statistical framework which allows a theoretical analysis of the effects of quantizer training on hashing performance. This is experimentally verified using a number of baseline robust image hashing algorithms over a large database of real-world images. This dissertation also proposes a new randomization method for robust image hashing based on Higher Order Spectra (HOS) and Radon projections. The method is non-linear, an essential requirement for non-invertibility. It is also designed to produce features more suited to quantization and encoding. The system can operate without the need for quantizer training, is more easily encoded and displays improved hashing performance when compared to existing robust image hashing algorithms. The dissertation also shows how the HOS method can be adapted to work with biometric features obtained from 2D and 3D face images.
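The quantize-and-encode stage described in this abstract can be illustrated with a minimal sketch. This is not the dissertation's algorithm: it assumes simple per-dimension median thresholds learnt from training data (a common baseline choice that yields balanced bits), and the feature vectors are synthetic.

```python
import numpy as np

def train_quantizer(features):
    # Learn per-dimension binarization thresholds from training data.
    # The median of each dimension gives balanced (roughly 50/50) bits.
    return np.median(features, axis=0)

def binarize(feature_vec, thresholds):
    # Encode a real-valued feature vector as a binary hash.
    return (feature_vec > thresholds).astype(np.uint8)

def hamming_distance(h1, h2):
    # Hash comparison happens in the binary domain.
    return int(np.sum(h1 != h2))

rng = np.random.default_rng(0)
training = rng.normal(size=(1000, 64))      # simulated feature vectors
thresholds = train_quantizer(training)

original = rng.normal(size=64)
perturbed = original + rng.normal(scale=0.05, size=64)  # minor distortion

# A robust hash should keep the Hamming distance small under minor changes.
d = hamming_distance(binarize(original, thresholds),
                     binarize(perturbed, thresholds))
```

Note how the thresholds themselves are learnt from data, which is exactly why the abstract identifies quantizer training as both an accuracy and a security concern: the thresholds leak information about the feature distribution.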
Abstract:
This paper investigates engaging experienced birders, as volunteer citizen scientists, to analyze large recorded audio datasets gathered through environmental acoustic monitoring. Although audio data is straightforward to gather, automated analysis remains a challenging task; the existing expertise, local knowledge and motivation of the birder community can complement computational approaches and provide distinct benefits. We explored both the culture and practice of birders, and paradigms for interacting with recorded audio data. A variety of candidate design elements were tested with birders. This study contributes an understanding of how virtual interactions and practices can be developed to complement the existing practices of experienced birders in the physical world. In so doing, this study contributes a new approach to engagement in e-science. Whereas most citizen science projects task lay participants with discrete real-world or artificial activities, sometimes using extrinsic motivators, this approach builds on existing intrinsically satisfying practices.
Abstract:
Vehicular accidents are one of the deadliest safety hazards and accordingly an immense concern of individuals and governments. Although a wide range of active autonomous safety systems, such as advanced driving assistance and lane keeping support, have been introduced to facilitate a safer driving experience, these stand-alone systems have limited capabilities in providing safety. Therefore, cooperative vehicular systems were proposed to fulfill more safety requirements. Most cooperative vehicle-to-vehicle safety applications require relative positioning accuracy at the decimeter level with an update rate of at least 10 Hz. These requirements cannot be met via direct navigation or differential positioning techniques. This paper studies a cooperative vehicle platform that aims to facilitate real-time relative positioning (RRP) among adjacent vehicles. The developed system is capable of exchanging both GPS position solutions and raw observations using the RTCM-104 format over vehicular dedicated short range communication (DSRC) links. The real-time kinematic (RTK) positioning technique is integrated into the system to enable RRP to serve as an embedded real-time warning system. The 5.9 GHz DSRC technology is adopted as the communication channel among road-side units (RSUs) and on-board units (OBUs) to distribute GPS correction data received from a nearby reference station via the Internet using cellular technologies, by means of RSUs, as well as to exchange the vehicular real-time GPS raw observation data. Ultimately, each receiving vehicle calculates the relative positions of its neighbors to attain an RRP map. A series of real-world data collection experiments was conducted to explore the synergies of both DSRC and positioning systems. The results demonstrate a significant enhancement in precision and availability of relative positioning at mobile vehicles.
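The core idea of relative positioning between neighbouring vehicles can be sketched very simply. This is not the paper's RTK algorithm (which works on raw carrier-phase observations to reach decimeter accuracy); it is only the geometric step of turning two exchanged position solutions into a local East/North/Up baseline, using a flat-earth approximation that is adequate for short V2V distances. The coordinates are invented for illustration.

```python
import math

def relative_enu(lat_ref, lon_ref, lat, lon, alt_ref=0.0, alt=0.0):
    # Relative East/North/Up vector (metres) from a reference vehicle
    # to a neighbour. A local flat-earth approximation is acceptable
    # for the short baselines typical of V2V safety applications.
    R = 6378137.0                       # WGS-84 semi-major axis (m)
    dlat = math.radians(lat - lat_ref)
    dlon = math.radians(lon - lon_ref)
    east = dlon * R * math.cos(math.radians(lat_ref))
    north = dlat * R
    up = alt - alt_ref
    return east, north, up

# Two vehicles roughly 50 m apart on the same road (illustrative coordinates)
e, n, u = relative_enu(-27.4700, 153.0250, -27.46955, 153.0250)
distance = math.hypot(e, n)
```

Each vehicle would apply such a computation to every neighbour's broadcast solution to build its RRP map; the RTK step in the paper serves to make the underlying positions accurate enough for the result to be meaningful.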
Abstract:
The existence of the Macroscopic Fundamental Diagram (MFD), which relates space-mean density and flow, has been shown in urban networks under homogeneous traffic conditions. Since the MFD represents area-wide network traffic performance, studies on perimeter control strategies and area traffic state estimation utilizing the MFD concept have been reported. One of the key requirements for a well-defined MFD is the homogeneity of the area-wide traffic condition with links of similar properties, which is not universally expected in the real world. For the practical application of the MFD concept, several researchers have identified the influencing factors for network homogeneity. However, they did not explicitly take into account the impact of drivers’ behaviour and information provision, which has a significant impact on simulation outputs. This research aims to demonstrate the effect of dynamic information provision on network performance by employing the MFD as a measurement. A microscopic simulation, AIMSUN, is chosen as the experiment platform. By changing the ratio of en-route informed drivers to pre-trip informed drivers, different scenarios are simulated in order to investigate how drivers’ adaptation to traffic congestion influences the network performance with respect to the MFD shape as well as other indicators, such as total travel time. This study confirmed the impact of information provision on the MFD shape, and addressed the usefulness of the MFD for measuring the benefit of dynamic information provision.
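For readers unfamiliar with the MFD, a single point on it is obtained by aggregating per-link flows and densities, conventionally weighted by link length. The sketch below shows that aggregation only (it is not the study's simulation); the link values are illustrative.

```python
def mfd_point(links):
    # Space-mean network flow and density from per-link measurements.
    # Each link is (flow veh/h, density veh/km, length km); weighting
    # by link length is the standard aggregation for plotting an MFD.
    total_len = sum(length for _, _, length in links)
    q = sum(flow * length for flow, _, length in links) / total_len
    k = sum(dens * length for _, dens, length in links) / total_len
    return q, k

# Illustrative detector readings for three links at one time interval
links = [(1800, 30, 0.5), (900, 60, 0.8), (1200, 45, 0.3)]
q, k = mfd_point(links)   # one (density, flow) point on the MFD
```

Repeating this aggregation over successive simulation intervals traces out the MFD curve whose shape the study uses as its performance measurement.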
Abstract:
This thesis explored the knowledge and reasoning of young children in solving novel statistical problems, and the influence of problem context and design on their solutions. It found that young children's statistical competencies are underestimated, and that problem design and context facilitated children's application of a wide range of knowledge and reasoning skills, none of which had been taught. A qualitative design-based research method, informed by the Models and Modeling perspective (Lesh & Doerr, 2003), underpinned the study. Data modelling activities incorporating picture story books were used to contextualise the problems. Children applied real-world understanding to problem solving, including attribute identification, categorisation and classification skills. Intuitive and metarepresentational knowledge, together with inductive and probabilistic reasoning, was used to make sense of data, and a beginning awareness of statistical variation and informal inference was visible.
Abstract:
Middle school is a crucial stage of education in which adolescents experience physiological and psychological changes and require expert guidance. As more research evidence is provided about adolescent learning, teachers are considered pivotal to adolescents’ educational development. Reform measures need to be targeted at the inservice and preservice teacher levels. This quantitative study employs a 40-item, five-part Likert-scale survey to understand preservice teachers’ (n = 142) perceptions of their confidence to teach in a middle school at the conclusion of their tertiary education. The survey instrument was developed from the literature, with connections to the Queensland College of Teachers' professional standards. Results indicated that preservice teachers perceived themselves as capable of creating a positive classroom environment, with seven items rated above 80%, except for behaviour management (< 80% for two items), and they considered their pedagogical knowledge to be adequate (i.e., 7 out of 8 items > 84%). Items associated with implementing a middle school curriculum had varied responses (e.g., implementing literacy and numeracy were 74%, while implementing learning with real-world connections was 91%). This information may assist coursework designers. For example, if a significant percentage of preservice teachers indicate that they believe they were not well prepared for assessment and reporting at the middle school level, then course designers can target these areas more effectively.
Abstract:
The need to address on-road motorcycle safety in Australia is important due to the disproportionately high percentage of riders and pillions killed and injured each year. One approach to preventing motorcycle-related injury is through training and education. However, motorcycle rider training lacks empirical support as an effective road safety countermeasure to reduce crash involvement. Previous reviews have highlighted that risk-taking is a contributing factor in many motorcycle crashes, rather than merely a lack of vehicle-control skills (Haworth & Mulvihill, 2005; Jonah, Dawson & Bragg, 1982; Watson et al., 1996). Hence, though the basic vehicle-handling skills and knowledge of road rules that are taught in most traditional motorcycle licence training programs may be seen as an essential condition of safe riding, they do not appear to be sufficient in terms of crash reduction. With this in mind, there is considerable scope for improving the focus and content of rider training and education programs. This program of research examined an existing traditional pre-licence motorcycle rider training program and formatively evaluated the addition of a new classroom-based module to address risky riding: the Three Steps to Safer Riding program. The pilot program was delivered in the real-world context of the Q-Ride motorcycle licensing system in the state of Queensland, Australia. Three studies were conducted as part of the program of research: Study 1, a qualitative investigation of delivery practices and student learning needs in an existing rider training course; Study 2, an investigation of the extent to which an existing motorcycle rider training course addressed risky riding attitudes and motives; and Study 3, a formative evaluation of the new program. A literature review as well as the investigation of learning needs for motorcyclists in Study 1 aimed to inform the initial planning and development of the Three Steps to Safer Riding program.
Findings from Study 1 suggested that the training delivery protocols used by the industry partner training organisation were consistent with a learner-centred approach and largely met the learning needs of trainee riders. However, it also found that information from the course needed to be reinforced by on-road experience for some riders once licensed, and that personal meaning for training information was not fully gained until some riding experience had been obtained. While this research informed the planning and development of the new program, a project team of academics and industry experts was responsible for the formulation of the final program. Study 2 and Study 3 were conducted for the purpose of formative evaluation and program refinement. Study 2 served primarily as a trial to test research protocols and data collection methods with the industry partner organisation and, importantly, also served to gather comparison data for the pilot program, which was implemented with the same rider training organisation. Findings from Study 2 suggested that the existing training program of the partner organisation generally had a positive (albeit small) effect on safety in terms of influencing attitudes to risk taking, the propensity for thrill seeking, and intentions to engage in future risky riding. However, the maintenance of these effects over time and the effects on riding behaviour remain unclear due to a low response rate upon follow-up 24 months after licensing. Study 3 was a formative evaluation of the new pilot program to establish program effects and possible areas for improvement. Study 3a examined the short-term effects of the intervention pilot on psychosocial factors underpinning risky riding compared to the effects of the standard traditional training program (examined in Study 2).
It showed that the course which included the Three Steps to Safer Riding program elicited significantly greater positive attitude change towards road safety than the existing standard licensing course. This effect was found immediately following training, and mean scores for attitudes towards safety were also maintained at the 12 month follow-up. The pilot program also had an immediate effect on other key variables such as risky riding intentions and the propensity for thrill seeking, although not significantly greater than the traditional standard training. A low response rate at the 12 month follow-up unfortunately prevented any firm conclusions being drawn regarding the impact of the pilot program on self-reported risky riding once licensed. Study 3a further showed that the use of intermediate outcomes such as self-reported attitudes and intentions for evaluation purposes provides insights into the mechanisms underpinning risky riding that can be changed by education and training. A multifaceted process evaluation conducted in Study 3b confirmed that the intervention pilot was largely delivered as designed, with course participants also rating most aspects of training delivery highly. The complete program of research contributed to the overall body of knowledge relating to motorcycle rider training, with some potential implications for policy in the area of motorcycle rider licensing. A key finding of the research was that psychosocial influences on risky riding can be shaped by structured education that focuses on awareness raising at a personal level and provides strategies to manage future riding situations. However, the formative evaluation was mainly designed to identify areas of improvement for the Three Steps to Safer Riding program and found several areas of potential refinement to improve future efficacy of the program. This included aspects of program content, program delivery, resource development, and measurement tools. 
The planned future follow-up of program participants' official crash and traffic offence records over time may lend further support for the application of the program within licensing systems. The findings reported in this thesis offer an initial indication that the Three Steps to Safer Riding is a useful resource to accompany skills-based training programs.
Abstract:
There is a growing trend to offer students learning opportunities that are flexible, innovative and engaging. As educators embrace student-centred agile teaching and learning methodologies, which require continuous reflection and adaptation, the need to evaluate students’ learning in a timely manner has become more pressing. Conventional evaluation surveys currently dominate the evaluation landscape internationally, despite recognition that they are insufficient to effectively evaluate curriculum and teaching quality. Surveys often: (1) fail to address the issues for which educators need feedback, (2) constrain student voice, (3) have low response rates and (4) occur too late to benefit current students. Consequently, this paper explores principles of effective feedback to propose a framework for learner-focused evaluation. We apply a three-stage control model, involving feedforward, concurrent and feedback evaluation, to investigate the intersection of assessment and evaluation in agile learning environments. We conclude that learner-focused evaluation cycles can be used to guide action so that evaluation is not undertaken simply for the benefit of future offerings, but rather to benefit current students by allowing ‘real-time’ learning activities to be adapted in the moment. As a result, students become co-producers of learning and evaluation becomes a meaningful, responsive dialogue between students and their instructors.
Abstract:
In 2012, Queensland University of Technology (QUT) committed to the massive project of revitalizing its Bachelor of Science (ST01) degree. Like most universities in Australia, QUT has begun work to align all courses by 2015 to the requirements of the updated Australian Qualifications Framework (AQF), which is regulated by the Tertiary Education Quality and Standards Agency (TEQSA). From the very start of the redesigned degree program, students approach scientific study with an exciting mix of theory and highly topical real-world examples through their chosen “grand challenge.” These challenges, Fukushima and nuclear energy for example, are the lenses used to explore science and lead to 21st century learning outcomes for students. For the teaching and learning support staff, our grand challenge is to expose all science students to multidisciplinary content with a strong emphasis on embedding information literacies into the curriculum. With ST01, QUT is taking the initiative to rethink not only content but how units are delivered and even how we work together between the faculty, the library and learning and teaching support. This was the desired outcome, but as we move from design to implementation, has this goal been achieved? A main component of the new degree is to ensure scaffolding of information literacy skills throughout the entirety of the three-year course. However, with the strong focus on problem-based learning and group work skills, many issues arise both for students and lecturers. A move away from a traditional lecture style is necessary but impacts on academics’ workload and comfort levels. Therefore, academics in collaboration with librarians and other learning support staff must draw on each other's expertise to work together to ensure pedagogy, assessments and targeted classroom activities are mapped within and between units.
This partnership can counteract the tendency of isolated, unsupported academics to concentrate on day-to-day teaching at the expense of consistency between units and big-picture objectives. Support staff may have a more holistic view of a course or degree than coordinators of individual units, making communication and truly collaborative planning even more critical. As well, due to staffing time pressures, design and delivery of new curriculum is generally done quickly, with no option for the designers to stop and reflect on the experience and outcomes. It is vital we take this unique opportunity to closely examine what QUT has and hasn’t achieved in order to recommend a better way forward. This presentation will discuss these important issues and stumbling blocks, to provide a set of best practice guidelines for QUT and other institutions. The aim is to help improve collaboration within the university, as well as to maximize students’ ability to put information literacy skills into action. As our students embark on their own grand challenges, we must challenge ourselves to honestly assess our own work.
Abstract:
Technological advances have led to an influx of affordable hardware that supports sensing, computation and communication. This hardware is increasingly deployed in public and private spaces, tracking and aggregating a wealth of real-time environmental data. Although these technologies are the focus of several research areas, there is a lack of research dealing with the problem of making these capabilities accessible to everyday users. This thesis represents a first step towards developing systems that will allow users to leverage the available infrastructure and create custom-tailored solutions. It explores how this notion can be utilized in the context of energy monitoring to improve conventional approaches. The project adopted a user-centered design process to inform the development of a flexible system for real-time data stream composition and visualization. This system features an extensible architecture and defines a unified API for heterogeneous data streams. Rather than displaying the data in a predetermined fashion, it makes this information available as building blocks that can be combined and shared. It is based on the insight that individual users have diverse information needs and presentation preferences. Therefore, it allows users to compose rich information displays, incorporating personally relevant data from an extensive information ecosystem. The prototype was evaluated in an exploratory study to observe its natural use in a real-world setting, gathering empirical usage statistics and conducting semi-structured interviews. The results show that a high degree of customization does not guarantee sustained usage. Other factors were identified, yielding recommendations for increasing the impact on energy consumption.
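The idea of a unified API over heterogeneous streams, with data exposed as composable building blocks, can be sketched in a few lines. This is a hypothetical illustration, not the thesis's actual architecture: it assumes every source is reduced to an iterator of (timestamp, value) pairs and composed by chaining transformations.

```python
from typing import Callable, Iterator

class Stream:
    # Minimal unified wrapper: any sensor source becomes an iterator of
    # (timestamp, value) pairs, and displays are built by composition.
    def __init__(self, source: Iterator):
        self.source = source

    def map(self, fn: Callable) -> "Stream":
        return Stream((t, fn(v)) for t, v in self.source)

    def filter(self, pred: Callable) -> "Stream":
        return Stream((t, v) for t, v in self.source if pred(v))

    def collect(self):
        return list(self.source)

# Hypothetical energy readings: (timestamp, watts)
readings = [(0, 120.0), (1, 2400.0), (2, 95.0)]
kw_spikes = (Stream(iter(readings))
             .map(lambda w: w / 1000.0)      # watts -> kilowatts
             .filter(lambda kw: kw > 1.0)    # keep only high-load events
             .collect())
```

Because each composed `Stream` is itself a valid building block, users can share and recombine derived streams rather than being limited to a predetermined display, which is the flexibility the thesis argues for.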
Abstract:
Facial expression recognition (FER) systems must ultimately work on real data in uncontrolled environments, although most research studies have been conducted on lab-based data with posed or evoked facial expressions obtained in pre-set laboratory environments. It is very difficult to obtain data in real-world situations because privacy laws prevent unauthorized capture and use of video from events such as funerals, birthday parties, weddings, etc. It is a challenge to acquire such data on a scale large enough for benchmarking algorithms. Although video obtained from TV, movies or postings on the World Wide Web may also contain ‘acted’ emotions and facial expressions, it may be more ‘realistic’ than the lab-based data currently used by most researchers. Or is it? One way of testing this is to compare feature distributions and FER performance. This paper describes a database that has been collected from television broadcasts and the World Wide Web containing a range of environmental and facial variations expected in real conditions, and uses it to answer this question. A fully automatic system that uses a fusion-based approach for FER on such data is introduced for performance evaluation. Performance improvements arising from the fusion of point-based texture and geometry features, and the robustness to image scale variations, are experimentally evaluated on this image and video dataset. Differences in FER performance between lab-based and realistic data, between different feature sets, and between different train-test data splits are investigated.
Abstract:
Agent-based modelling (ABM), like other modelling techniques, is used to answer specific questions about real-world systems that could otherwise be expensive or impractical to study. Its recent gain in popularity can be attributed to some degree to its capacity to use information at a fine level of detail of the system, both geographically and temporally, and generate information at a higher level, where emerging patterns can be observed. This technique is data-intensive, as explicit data at a fine level of detail is used, and computer-intensive, as many interactions between agents, which can learn and have a goal, are required. With the growing availability of data and the increase in computing power, these concerns are, however, fading. Nonetheless, being able to update or extend the model as more information becomes available can become problematic, because of the tight coupling of the agents and their dependence on the data, especially when modelling very large systems. One large system to which ABM is currently applied is electricity distribution, where thousands of agents representing the network and the consumers’ behaviours interact with one another. A framework that aims at answering a range of questions regarding the potential evolution of the grid has been developed and is presented here. It uses agent-based modelling to represent the engineering infrastructure of the distribution network and has been built with flexibility and extensibility in mind. What distinguishes the method presented here from the usual ABMs is that this ABM has been developed in a compositional manner. This encompasses not only the software tool, whose core is named MODAM (MODular Agent-based Model), but the model itself. Using such an approach enables the model to be extended as more information becomes available or modified as the electricity system evolves, leading to an adaptable model.
Two well-known modularity principles in the software engineering domain are information hiding and separation of concerns. These principles were used to develop the agent-based model on top of OSGi and Eclipse plugins, which have good support for modularity. Information regarding the model entities was separated into a) assets, which describe the entities’ physical characteristics, and b) agents, which describe their behaviour according to their goal and previous learning experiences. This approach diverges from the traditional approach, where both aspects are often conflated. It has many advantages in terms of reusability of one or the other aspect for different purposes, as well as composability when building simulations. For example, the way an asset is used on a network can vary greatly while its physical characteristics remain the same – this is the case for two identical battery systems whose usage will vary depending on the purpose of their installation. While any battery can be described by its physical properties (e.g. capacity, lifetime, and depth of discharge), its behaviour will vary depending on who is using it and what their aim is. The model is populated using data describing both aspects (physical characteristics and behaviour) and can be updated as required depending on what simulation is to be run. For example, data can be used to describe the environment to which the agents respond – e.g. weather for solar panels – or to describe the assets and their relation to one another – e.g. the network assets. Finally, when running a simulation, MODAM calls on its module manager, which coordinates the different plugins, automates the creation of the assets and agents using factories, and schedules their execution, which can be done sequentially or in parallel for speed. Building agent-based models in this way has proven fast when adding new complex behaviours, as well as new types of assets.
Simulations have been run to understand the potential impact of changes on the network in terms of assets (e.g. installation of decentralised generators) or behaviours (e.g. response to different management aims). While this platform has been developed within the context of a project focussing on the electricity domain, the core of the software, MODAM, can be extended to other domains, such as transport, which is part of future work with the addition of electric vehicles.
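The asset/agent separation described above can be illustrated with a minimal sketch. This is not MODAM code (MODAM is built on OSGi/Eclipse plugins in Java); it is a hypothetical example of the design principle: the battery example from the abstract, with physical characteristics in an asset and behaviour in an interchangeable agent.

```python
from dataclasses import dataclass

@dataclass
class Battery:
    # Asset: physical characteristics only, no behaviour.
    capacity_kwh: float
    charge_kwh: float = 0.0

class PeakShavingAgent:
    # Agent: one possible behaviour attached to the asset. The same
    # Battery could be driven by a different agent (e.g. solar
    # self-consumption) without touching its physical description.
    def __init__(self, asset: Battery, peak_kw: float):
        self.asset, self.peak_kw = asset, peak_kw

    def step(self, demand_kw: float) -> float:
        # Discharge to cap net demand at the peak threshold.
        # (For simplicity, one step is assumed to last one hour,
        # so kW and kWh are interchanged 1:1.)
        surplus = max(0.0, demand_kw - self.peak_kw)
        discharge = min(surplus, self.asset.charge_kwh)
        self.asset.charge_kwh -= discharge
        return demand_kw - discharge

battery = Battery(capacity_kwh=10.0, charge_kwh=5.0)
agent = PeakShavingAgent(battery, peak_kw=4.0)
net = [agent.step(d) for d in [3.0, 6.0, 7.0]]   # hourly demand profile
```

Swapping `PeakShavingAgent` for another agent class changes the simulated behaviour while the `Battery` asset, and any data describing it, is reused unchanged, which is the reusability and composability benefit the abstract claims for the separation.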
Abstract:
An Artificial Neural Network (ANN) is a computational modeling tool which has found extensive acceptance in many disciplines for modeling complex real-world problems. An ANN can model problems through learning by example, rather than by fully understanding the detailed characteristics and physics of the system. In the present study, the accuracy and predictive power of an ANN was evaluated in predicting the kinematic viscosity of biodiesels over a wide range of temperatures typically encountered in diesel engine operation. In this model, temperature and chemical composition of biodiesel were used as input variables. In order to obtain the necessary data for model development, the chemical composition and temperature-dependent fuel properties of ten different types of biodiesel were measured experimentally using laboratory standard testing equipment following internationally recognized testing procedures. The Neural Networks Toolbox of MatLab R2012a software was used to train, validate and simulate the ANN model on a personal computer. The network architecture was optimised by trial and error to obtain the best prediction of the kinematic viscosity. The predictive performance of the model was determined by calculating the absolute fraction of variance (R2), root mean squared (RMS) error and maximum average error percentage (MAEP) between predicted and experimental results. This study found that the ANN is highly accurate in predicting the viscosity of biodiesel and demonstrates the ability of the ANN model to find a meaningful relationship between biodiesel chemical composition and fuel properties at different temperature levels. Therefore the model developed in this study can be a useful tool for accurately predicting biodiesel fuel properties instead of undertaking costly and time-consuming experimental tests.
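The performance measures named in the abstract are straightforward to compute once predictions are in hand. The sketch below shows R2 and RMS error for a set of hypothetical (invented) viscosity values; it does not reproduce the study's MATLAB model or its data.

```python
import math

def r2(actual, predicted):
    # Coefficient of determination between experimental and predicted values.
    mean = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean) ** 2 for a in actual)
    return 1.0 - ss_res / ss_tot

def rms_error(actual, predicted):
    # Root mean squared error in the same units as the measurements.
    n = len(actual)
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n)

# Hypothetical kinematic viscosities (mm^2/s) at several temperatures
measured  = [6.2, 4.8, 3.9, 3.1, 2.6]
predicted = [6.0, 4.9, 3.8, 3.2, 2.5]
score = r2(measured, predicted)        # close to 1.0 for a good model
error = rms_error(measured, predicted) # average deviation, mm^2/s
```

An R2 approaching 1 combined with a small RMS error, evaluated on held-out data, is what supports the abstract's claim that the trained model can substitute for some experimental testing.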
Abstract:
Two key elements of education for sustainability (EfS) are action-competence, and the importance of place and experiencing the natural world. These elements emphasise and depend on the relationship between learners and their real-world contexts, and have been incorporated to some extent into the sustainability cross-curricular perspective of the new Australian curriculum. Given the importance of real-world experiential learning in EfS, what is to be made of the use of multi-user virtual worlds in EfS? We went with our preservice secondary science teachers to the very appealing virtual world Quest Atlantis, which we are using in this paper as an example to explore the value of virtual worlds in EfS. In assessing the virtual world of Quest Atlantis against Australia’s Sustainability Curriculum Framework, many areas of coherence are evident relating to world viewing, systems thinking and futures thinking, knowledge of ecological and human systems, and implementing and reflecting on the consequences of actions. The power and appeal of these virtual experiences in developing these knowledges is undeniable. However, there is some incoherence between the elements of EfS as expressed in the Sustainability Curriculum Framework and the experience of QA, where learners are not acting in their real world, or developing a connection with real place. This analysis highlights both the value and some limitations of virtual worlds as a venue for EfS.