983 results for curriculum models
Abstract:
Driving on motorways has largely been reduced to a lane-keeping task with cruise control. Drivers are likely to become bored with such a task rapidly and take their attention away from the road. This is of concern in terms of road safety, particularly for professional drivers, since inattention has been identified as one of the main contributing factors to road crashes and is estimated to be involved in 20 to 30% of these crashes. Furthermore, drivers are not aware that their vigilance level has decreased and that their driving performance is impaired. Intelligent Transportation System (ITS) intervention can be used as a countermeasure against this vigilance decrement. This paper aims to identify a variety of metrics affected during monotonous driving, ranging from vehicle data to physiological variables, and to relate them to two monotony factors: the monotony of the road design (straightness) and the monotony of the environment (landscape, signage, traffic). Data are collected in a driving simulator instrumented with an eye-tracking system, a heart rate monitor and an electrodermal activity device (N=25 participants). The two monotony factors are varied (high and low), leading to four different driving scenarios (40 minutes each). We show with Generalised Linear Mixed Models that driver performance decreases faster when the road is monotonous. We also highlight that road monotony impairs a variety of driving performance and vigilance measures, ranging from speed and lateral position of the vehicle to physiological measurements such as heart rate variability, blink frequency and electrodermal activity. This study informs road designers of the importance of a varied road environment. It also provides a range of metrics that can be used to detect in real time the impairment of driving performance on monotonous roads. Such knowledge could result in the development of an in-vehicle device that warns drivers at the first signs of driving performance impairment on monotonous roads.
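As a rough illustration of the kind of analysis reported above, the sketch below fits a mixed-effects model relating a driving-performance metric to road monotony over time, with a random intercept per participant. It uses a linear mixed model from statsmodels as a stand-in for the paper's Generalised Linear Mixed Models; the column names and synthetic data are hypothetical.

```python
# Minimal sketch: mixed-effects model of a driving metric against monotony.
# Column names and data are hypothetical; a linear mixed model is used here
# as a stand-in for the paper's Generalised Linear Mixed Models.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_participants, n_obs = 25, 40
df = pd.DataFrame({
    "participant": np.repeat(np.arange(n_participants), n_obs),
    "minutes": np.tile(np.arange(n_obs), n_participants),
    "road_monotony": rng.integers(0, 2, n_participants * n_obs),  # 0 = low, 1 = high
    "env_monotony": rng.integers(0, 2, n_participants * n_obs),
})
# Synthetic lane-position error that degrades faster on monotonous roads
df["lane_error"] = (0.02 * df["minutes"]
                    + 0.015 * df["minutes"] * df["road_monotony"]
                    + rng.normal(0, 0.5, len(df)))

# Random intercept per participant; the interaction term tests whether
# performance decays faster under high road monotony
model = smf.mixedlm("lane_error ~ minutes * road_monotony + env_monotony",
                    df, groups=df["participant"])
print(model.fit().summary())
```

A positive minutes:road_monotony interaction would correspond to the paper's finding that performance degrades faster on monotonous roads.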
Abstract:
This article reframes comprehension as a social and intellectual practice. It reviews literature on current approaches to reading instruction for linguistically and culturally diverse students and students from low socioeconomic backgrounds, noting the current policy emphasis on the teaching of comprehension as autonomous skills and ‘strategies’. The Four Resources model (Freebody & Luke, 1990) is used to situate comprehension instruction with an emphasis on student cultural and community knowledge, and on substantive intellectual and sociocultural content in elementary and middle school curricula. Illustrations are drawn from research underway on the teaching of literacy in low socioeconomic schools.
Abstract:
This thesis addresses computational challenges arising from Bayesian analysis of complex real-world problems. Many of the models and algorithms designed for such analysis are ‘hybrid’ in nature, in that they are a composition of components whose individual properties may be easily described, while the performance of the model or algorithm as a whole is less well understood. The aim of this research project is to offer a better understanding of the performance of hybrid models and algorithms. The goal of this thesis is to analyse the computational aspects of hybrid models and hybrid algorithms in the Bayesian context. The first objective of the research focuses on computational aspects of hybrid models, notably a continuous finite mixture of t-distributions. In the mixture model, an inference of interest is the number of components, as this may relate to both the quality of model fit to data and the computational workload. The analysis of t-mixtures using Markov chain Monte Carlo (MCMC) is described, and the model is compared to the Normal case based on goodness of fit. Through simulation studies, it is demonstrated that the t-mixture model can be more flexible and more parsimonious in terms of the number of components, particularly for skewed and heavy-tailed data. The study also reveals important computational issues associated with the use of t-mixtures, which have not been adequately considered in the literature. The second objective of the research focuses on computational aspects of hybrid algorithms for Bayesian analysis. Two approaches are considered: a formal comparison of the performance of a range of hybrid algorithms, and a theoretical investigation of the performance of one of these algorithms in high dimensions. For the first approach, the delayed rejection algorithm, the pinball sampler, the Metropolis adjusted Langevin algorithm, and the hybrid version of the population Monte Carlo (PMC) algorithm are selected as a set of examples of hybrid algorithms. The statistical literature shows that statistical efficiency is often the only criterion used to judge an algorithm. In this thesis the algorithms are also considered and compared from a more practical perspective. This extends to the study of how individual algorithms contribute to the overall efficiency of hybrid algorithms, and highlights weaknesses that may be introduced by the process of combining these components into a single algorithm. The second approach to considering computational aspects of hybrid algorithms involves an investigation of the performance of the PMC algorithm in high dimensions. It is well known that as a model becomes more complex, computation may become increasingly difficult in real time. In particular, importance sampling based algorithms, including the PMC, are known to be unstable in high dimensions. This thesis examines the PMC algorithm in a simplified setting, a single step of the general sampling scheme, and explores a fundamental problem that occurs in applying importance sampling to a high-dimensional problem. The precision of the computed estimate from the simplified setting is measured by the asymptotic variance of the estimate under conditions on the importance function. Additionally, the exponential growth of the asymptotic variance with the dimension is demonstrated, and we illustrate that the optimal covariance matrix for the importance function can be estimated in a special case.
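The high-dimensional instability of importance sampling noted above can be illustrated in a few lines: even a mildly mismatched Gaussian proposal makes the effective sample size collapse as the dimension grows. The target/proposal pair below is a standard textbook choice, not the thesis's exact setting.

```python
# Minimal sketch of importance-sampling instability in high dimensions:
# target N(0, I_d), proposal N(0, sigma^2 I_d). The variance of the weights
# grows exponentially with d, so the effective sample size collapses.
# Illustrative setup, not the thesis's exact PMC setting.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sigma = 1.2          # mildly over-dispersed Gaussian proposal
n = 100_000

for d in (1, 5, 20, 50):
    x = rng.normal(0.0, sigma, size=(n, d))
    # log importance weights: log p(x) - log q(x), both isotropic Gaussians
    log_w = (stats.norm.logpdf(x, 0.0, 1.0).sum(axis=1)
             - stats.norm.logpdf(x, 0.0, sigma).sum(axis=1))
    w = np.exp(log_w - log_w.max())
    ess = w.sum() ** 2 / (w ** 2).sum()   # effective sample size
    print(f"d={d:3d}  ESS={ess:10.1f} of {n}")
```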
Abstract:
Digital production and distribution technologies may create new opportunities for filmmaking in Australia. A culture of new approaches to filmmaking is emerging, driven by ‘next generation filmmakers’ who are willing to consider new business models: from online web series to short films produced for mobile phones. At the same time, cultural representation itself is being transformed within an interactive, social media driven environment. Yet there is very little research into next generation filmmaking. The aim of this paper is to scope and discuss three key aspects of next generation filmmaking, namely: digital trends in film distribution and marketing; processes and strategies of ‘next generation’ filmmakers; and case studies of viable next generation business models and filmmaking practices. We conclude with a brief examination of the implications for media and cultural policy, which suggests the future possibility of a rapprochement between creative industries discourse and cultural policy.
Abstract:
Curriculum evaluation, as a field of study, is dynamic. Over the years it has been responsive to developments in the conceptualization of curriculum and the associated processes of curriculum change. The concept of curriculum is integral to curriculum evaluation and can be defined in terms of what can and shall be taught to whom, when, where, how, and why. Much of the decision making relates to what knowledge is to be selected for inclusion in the curriculum.
Abstract:
In this thesis, the issue of incorporating uncertainty into environmental modelling informed by imagery is explored by considering uncertainty in deterministic modelling, measurement uncertainty and uncertainty in image composition. Incorporating uncertainty in deterministic modelling is extended for use with imagery using the Bayesian melding approach. In the application presented, slope steepness is shown to be the main contributor to total uncertainty in the Revised Universal Soil Loss Equation. A spatial sampling procedure is also proposed to assist in implementing Bayesian melding, given the increased data size of models informed by imagery. Measurement error models are another approach to incorporating uncertainty when data are informed by imagery. These models for measurement uncertainty, considered in a Bayesian conditional independence framework, are applied to ecological data generated from imagery, and are shown to be appropriate and useful in certain situations. Measurement uncertainty is also considered in the context of change detection when two images are not co-registered. An approach for detecting change between two successive images is proposed that does not depend on registration. The procedure uses the Kolmogorov-Smirnov test on homogeneous segments of an image to detect change, with the homogeneous segments determined using a Bayesian mixture model of pixel values. Using the mixture model to segment an image also allows for uncertainty in the composition of an image. This thesis concludes by comparing several different Bayesian image segmentation approaches that allow for uncertainty regarding the allocation of pixels to different ground components. Each segmentation approach is applied to a data set of chlorophyll values and shown to have different benefits and drawbacks depending on the aims of the analysis.
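A minimal sketch of the registration-free change test described above: pixel values from the same homogeneous segment at two dates are compared with a two-sample Kolmogorov-Smirnov test, which looks only at the value distributions and so does not require the images to be aligned. The segment data here are synthetic, and the segment masks are assumed to come from a prior mixture-model segmentation.

```python
# Minimal sketch of registration-free change detection: compare the pixel
# value distributions of one homogeneous segment at two dates with a
# two-sample Kolmogorov-Smirnov test. Data are synthetic; segment masks are
# assumed to come from a prior Bayesian mixture-model segmentation.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(2)
segment_t1 = rng.normal(0.30, 0.05, 500)   # e.g. chlorophyll index, date 1
segment_t2 = rng.normal(0.38, 0.05, 500)   # shifted distribution -> change

stat, p_value = ks_2samp(segment_t1, segment_t2)
print(f"KS statistic={stat:.3f}, p={p_value:.4f}")
if p_value < 0.05:
    print("Distributions differ: flag segment as changed")
```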
Abstract:
Since 2002 QUT has sponsored a range of first-year-focussed initiatives, most recently the Transitions In Project (TIP), which was designed to complement the First Year Experience Program and act as a capacity-building initiative. A primary focus of TIP was The First Year Curriculum Project: the review, development, implementation and evaluation of first year curriculum, which has culminated in the development of a “Good Practice Guide” for the management of large first year units. First year curriculum initiates staff-student relationships and provides the scaffolding for the learning experience and engagement. Good practice in first year curriculum is within the control of the institution, and curriculum can be reviewed and redesigned to improve outcomes. This session will provide a context for the First Year Curriculum Project and a concise overview of the suite of resources developed, culminating in the Good Practice Guide.
Abstract:
Current healthcare models promote the equitable provision of palliative care to oncology patients with advancing disease, in the setting of their usual care, often in conjunction with anti-cancer therapies. This has resulted in specialist cancer services, as well as primary care across metropolitan, rural and remote communities, being called upon to integrate palliative care principles into their practice. To meet this increased demand for skilled health care professionals, several national strategies have been initiated over the last five years. In this paper two projects are discussed in detail: the Palliative Care Curriculum for Undergraduates and the Program of Experience in the Professional Approach.
Abstract:
A pragmatic method is proposed for assessing the accuracy and precision of a given processing pipeline for converting computed tomography (CT) image data of bones into representative three-dimensional (3D) models of bone shapes. The method is based on coprocessing a control object with known geometry, which enables the assessment of the quality of the resulting 3D models. At three stages of the conversion process, distance measurements were obtained and statistically evaluated. For this study, 31 CT datasets were processed. The final 3D model of the control object showed an average deviation from reference values of −1.07 ± 0.52 mm standard deviation (SD) for edge distances and −0.647 ± 0.43 mm SD for parallel side distances of the control object. Coprocessing a reference object enables the assessment of the accuracy and precision of a given processing pipeline for creating CT-based 3D bone models and is suitable for detecting most systematic or human errors when processing a CT scan. Typical errors are about the same size as the scan resolution.
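A minimal sketch of how accuracy and precision can be summarised against the control object: the mean deviation from the reference geometry gives the systematic offset (accuracy) and its standard deviation gives the spread (precision). All numbers below are illustrative, not study data.

```python
# Minimal sketch: summarise accuracy (mean deviation, i.e. bias) and
# precision (standard deviation of the deviations) of distances measured on
# the reconstructed control object against its known reference geometry.
# All numbers are illustrative, not study data.
import numpy as np

reference_edge_mm = 40.0                      # hypothetical known edge length
measured_edges_mm = np.array([38.9, 39.1, 38.8, 39.4, 38.7])

deviation = measured_edges_mm - reference_edge_mm
accuracy = deviation.mean()                   # systematic offset (bias)
precision = deviation.std(ddof=1)             # spread (standard deviation)
print(f"accuracy = {accuracy:+.2f} mm, precision (SD) = {precision:.2f} mm")
```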
Abstract:
Habitat models are widely used in ecology; however, there are relatively few studies of rare species, primarily because of a paucity of survey records and a lack of robust means of assessing the accuracy of modelled spatial predictions. We investigated the potential of compiled ecological data for developing habitat models for Macadamia integrifolia, a vulnerable mid-stratum tree endemic to lowland subtropical rainforests of southeast Queensland, Australia. We compared the performance of two binomial models, Classification and Regression Trees (CART) and Generalised Additive Models (GAM), with Maximum Entropy (MAXENT) models developed (i) from presence records and available absence data and (ii) from presence records and background data. The GAM model was the best performer across the range of evaluation measures employed; however, all models were assessed as potentially useful for informing in situ conservation of M. integrifolia. A significant loss in the amount of M. integrifolia habitat has occurred (p < 0.05), with only 37% of former (pre-clearing) habitat remaining in 2003. Remnant patches are significantly smaller, have larger edge-to-area ratios and are more isolated from each other compared with pre-clearing configurations (p < 0.05). Whilst the network of suitable habitat patches is still largely intact, there are numerous smaller patches that are more isolated in the contemporary landscape compared with their connectedness before clearing. These results suggest that in situ conservation of M. integrifolia may be best achieved through a landscape approach that considers the relative contribution of small remnant habitat fragments to the species as a whole, since they facilitate connectivity among the entire network of habitat patches.
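A sketch of this style of model comparison, using scikit-learn stand-ins on hypothetical presence/absence records: a CART-style decision tree against a logistic regression standing in for the GAM, scored by cross-validated AUC. Covariates, data and model settings are all assumptions for illustration.

```python
# Sketch of a habitat-model comparison with scikit-learn stand-ins:
# a CART-style decision tree versus a logistic regression (standing in for
# the GAM), scored by cross-validated AUC on synthetic presence/absence data.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n = 400
X = np.column_stack([
    rng.uniform(0, 800, n),      # elevation (m), hypothetical covariate
    rng.uniform(600, 2000, n),   # annual rainfall (mm), hypothetical covariate
])
# Synthetic presence/absence driven by low elevation and high rainfall
logit = -1.6 + 0.004 * (800 - X[:, 0]) + 0.003 * (X[:, 1] - 1300)
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

for name, model in [("CART", DecisionTreeClassifier(max_depth=4)),
                    ("GAM-like baseline", LogisticRegression(max_iter=1000))]:
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean AUC = {auc:.3f}")
```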
Abstract:
Our objective was to determine the factors that lead users to continue working with process modeling grammars after their initial adoption. We examined the explanatory power of three theoretical models of IT usage by applying them to two popular process modeling grammars. We found that a hybrid model of technology acceptance and expectation-confirmation best explained user intentions to continue using the grammars. We examined differences in the model results and used them to make three contributions. First, the study confirmed the applicability of IT usage models to the domain of process modeling. Second, we discovered that differences in continued usage intentions depended on the grammar type rather than on user characteristics. Third, we suggest implications for research and practice.
Abstract:
Across continents and cultures and periods of history, religious beliefs have underpinned curriculum in institutions of education. More recently, the so-called culture wars and terrorism have moved religion to center stage. In both state and independent education sectors, deep-seated assumptions about the nature of reality, spirituality, ethics and knowledge converge and clash in the curriculum documents of science, history, literacy education, and the like. With a focus on textual genres of power, starting with antiquity, this chapter argues that little has changed across the millennia: today the secular mysticism of price has replaced theology in constraining the potentials of education.
Abstract:
Three particular geometrical shapes, parallelepipeds, cylinders and spheres, were selected from potatoes (aspect ratio = 1:1, 2:1, 3:1), cut beans (length:diameter = 1:1, 2:1, 3:1) and peas, respectively. The density variation of the food particulates was studied in a batch fluidised bed dryer connected to a heat pump dehumidifier system. Apparent density and bulk density were evaluated as functions of non-dimensional moisture at three different drying temperatures of 30, 40 and 50 °C. The relative humidity of the hot air was kept at 15% at all drying temperatures. Several empirical relationships were developed for determining the changes in densities with moisture content. Simple mathematical models were obtained relating apparent density and bulk density to moisture content.
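A minimal sketch of fitting an empirical density-moisture relationship of the kind described above, using least-squares curve fitting. The linear functional form and all data values are illustrative assumptions, not the paper's fitted models.

```python
# Minimal sketch: fit an empirical density-moisture relationship by least
# squares. The linear form and all data values are illustrative assumptions,
# not the paper's fitted models.
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical measurements: non-dimensional moisture vs apparent density
moisture = np.array([1.0, 0.8, 0.6, 0.4, 0.2, 0.1])
density = np.array([1080, 1010, 950, 880, 820, 790])   # kg/m^3

def linear_model(m, a, b):
    """Apparent density as a linear function of the moisture ratio."""
    return a + b * m

params, _ = curve_fit(linear_model, moisture, density)
a, b = params
print(f"apparent density ~ {a:.1f} + {b:.1f} * (M/M0)  [kg/m^3]")
```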
Abstract:
We have developed a new experimental method for interrogating statistical theories of music perception by implementing these theories as generative music algorithms. We call this method Generation in Context. This method differs from most experimental techniques in music perception in that it incorporates aesthetic judgments. Generation in Context is designed to measure percepts for which the musical context is suspected to play an important role. In particular, the method is suitable for the study of perceptual parameters which are temporally dynamic. We outline a use of this approach to investigate David Temperley’s (2007) probabilistic melody model, and provide some provisional insights into the model. We suggest that Temperley’s model could be improved by dynamically modulating the probability distributions according to the changing musical context.
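A toy sketch in the spirit of the suggested improvement: each pitch is sampled from a probability distribution that is re-centred on the previous note, i.e. modulated by the evolving musical context. This is an illustrative simplification, not Temperley's (2007) model.

```python
# Toy generative melody sketch: sample each pitch from a distribution
# re-centred on the previous note, i.e. modulated by the evolving musical
# context. An illustrative simplification, not Temperley's (2007) model.
import numpy as np

rng = np.random.default_rng(4)
pitches = np.arange(60, 73)          # MIDI C4..C5
melody = [67]                        # start on G4

for _ in range(15):
    # Proximity profile: pitches near the previous note are more likely
    weights = np.exp(-0.5 * ((pitches - melody[-1]) / 2.0) ** 2)
    weights /= weights.sum()
    melody.append(int(rng.choice(pitches, p=weights)))

print(melody)
```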
Abstract:
Longitudinal data, in which observations are repeatedly measured on a temporal basis of time or age, provide the foundation for the analysis of processes which evolve over time; these can be referred to as growth or trajectory models. One of the traditional ways of looking at growth models is to employ either linear or polynomial functional forms to model the trajectory shape, accounting for variation around an overall mean trend by including random effects, that is, individual variation on the functional shape parameters. The identification of distinct subgroups or sub-classes (latent classes) within these trajectory models, which are not based on some pre-existing individual classification, provides an important methodology with substantive implications. The identification of subgroups or classes has wide application in the medical arena, where responder/non-responder identification based on distinctly differing trajectories delivers further information for clinical processes. This thesis develops Bayesian statistical models and techniques for the identification of subgroups in the analysis of longitudinal data where the number of time intervals is limited. These models are then applied to a single case study which investigates neuropsychological cognition in early-stage breast cancer patients undergoing adjuvant chemotherapy treatment, drawn from the Cognition in Breast Cancer Study undertaken by the Wesley Research Institute of Brisbane, Queensland. As alternatives to the linear or polynomial approach, piecewise linear models with a single turning point, change-point or knot at a known time point, and latent basis models, are used for the non-linear trajectories found for the verbal memory domain of cognitive function before and after chemotherapy treatment. Hierarchical Bayesian random effects models are used as a starting point for the latent class modelling process and are extended with the incorporation of covariates in the trajectory profiles and as predictors of class membership. The Bayesian latent basis models enable the degree of recovery post-chemotherapy to be estimated for short- and long-term follow-up occasions, and the distinct class trajectories assist in the identification of breast cancer patients who may be at risk of long-term verbal memory impairment.
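A minimal sketch of the piecewise linear trajectory form described above: one slope before a known change point and another after, fitted here by ordinary least squares on synthetic verbal-memory scores. The full thesis models are hierarchical Bayesian with latent classes; this shows only the single-knot basis.

```python
# Minimal sketch of a piecewise linear trajectory with one knot at a known
# time point, fitted by ordinary least squares on synthetic scores. The
# thesis models are hierarchical Bayesian with latent classes; this only
# illustrates the single-knot basis.
import numpy as np

rng = np.random.default_rng(5)
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])      # assessment occasions
knot = 1.0                                    # known change point (e.g. end of chemo)

# Synthetic verbal-memory scores: decline to the knot, partial recovery after
y = (50 - 4 * np.minimum(t, knot)
     + 1.5 * np.maximum(t - knot, 0)
     + rng.normal(0, 0.5, t.size))

# Design matrix: intercept, pre-knot slope, change in slope after the knot
X = np.column_stack([np.ones_like(t), t, np.maximum(t - knot, 0)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"intercept={beta[0]:.2f}, slope={beta[1]:.2f}, "
      f"slope change after knot={beta[2]:+.2f}")
```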