982 results for Simon, Heinrich
Abstract:
Many methods currently exist for deformable face fitting. A drawback of nearly all these approaches is that their landmark estimates are (i) noisy, and (ii) biased across frames (i.e. the misalignment tends toward common directions across all frames). In this paper we propose a grouped $\ell_1$-norm anchored method for simultaneously aligning an ensemble of deformable face images stemming from the same subject, given noisy heterogeneous landmark estimates. Substantial improvement and refinement of alignment performance is obtained using only very weak initializations as "anchors".
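Grouped $\ell_1$-norm penalties of this kind are typically handled through their proximal operator, which shrinks each group of variables (e.g. the offsets of one landmark) toward zero as a unit. A minimal numpy sketch of that operator (the function name is hypothetical, not from the paper):

```python
import numpy as np

def group_soft_threshold(v, tau):
    """Proximal operator of the grouped L1 (L2,1) norm for one group:
    shrinks the whole group toward zero by tau, zeroing it entirely
    if its Euclidean norm is at most tau."""
    norm = np.linalg.norm(v)
    if norm <= tau:
        return np.zeros_like(v)
    return (1.0 - tau / norm) * v

# A landmark-offset group with norm 5, shrunk by tau = 1:
g = np.array([3.0, 4.0])
print(group_soft_threshold(g, 1.0))  # direction preserved, norm reduced to 4
```

Because each group is zeroed or kept as a whole, gross errors concentrated on individual landmarks can be suppressed without penalizing every coordinate independently.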
Abstract:
Student performance on examinations is influenced by the level of difficulty of the questions. It seems reasonable to propose therefore that assessment of the difficulty of exam questions could be used to gauge the level of skills and knowledge expected at the end of a course. This paper reports the results of a study investigating the difficulty of exam questions using a subjective assessment of difficulty and a purpose-built exam question complexity classification scheme. The scheme, devised for exams in introductory programming courses, assesses the complexity of each question using six measures: external domain references, explicitness, linguistic complexity, conceptual complexity, length of code involved in the question and/or answer, and intellectual complexity (Bloom level). We apply the scheme to 20 introductory programming exam papers from five countries, and find substantial variation across the exams for all measures. Most exams include a mix of questions of low, medium, and high difficulty, although seven of the 20 have no questions of high difficulty. All of the complexity measures correlate with assessment of difficulty, indicating that the difficulty of an exam question relates to each of these more specific measures. We discuss the implications of these findings for the development of measures to assess learning standards in programming courses.
Abstract:
Educators are faced with many challenging questions in designing an effective curriculum. What prerequisite knowledge do students have before commencing a new subject? At what level of mastery? What is the spread of capabilities between bare-passing students vs. the top performing group? How does the intended learning specification compare to student performance at the end of a subject? In this paper we present a conceptual model that helps in answering some of these questions. It has the following main capabilities: capturing the learning specification in terms of syllabus topics and outcomes; capturing mastery levels to model progression; capturing the minimal vs. aspirational learning design; capturing confidence and reliability metrics for each of these mappings; and finally, comparing and reflecting on the learning specification against actual student performance. We present a web-based implementation of the model, and validate it by mapping the final exams from four programming subjects against the ACM/IEEE CS2013 topics and outcomes, using Bloom's Taxonomy as the mastery scale. We then import the itemised exam grades from 632 students across the four subjects and compare the demonstrated student performance against the expected learning for each of these. Key contributions of this work are the validated conceptual model for capturing and comparing expected learning vs. demonstrated performance, and a web-based implementation of this model, which is made freely available online as a community resource.
Abstract:
Recent research has proposed Neo-Piagetian theory as a useful way of describing the cognitive development of novice programmers. Neo-Piagetian theory may also be a useful way to classify materials used in learning and assessment. If Neo-Piagetian coding of learning resources is to be useful, then it is important that practitioners can learn it and apply it reliably. We describe the design of an interactive web-based tutorial for Neo-Piagetian categorization of assessment tasks. We also report an evaluation of the tutorial's effectiveness, in which twenty computer science educators participated. The average classification accuracies of the participants on the three Neo-Piagetian stages were 85%, 71% and 78%. Participants also rated their agreement with the expert classifications, indicating high agreement (91%, 83% and 91% across the three Neo-Piagetian stages). Self-rated confidence in applying Neo-Piagetian theory to classifying programming questions rose from 29% before the tutorial to 75% after it. Our key contribution is the demonstration of the feasibility of the Neo-Piagetian approach to classifying assessment materials, by showing that it is learnable and can be applied reliably by a group of educators. Our tutorial is freely available as a community resource.
Abstract:
Work integration social enterprises (WISE) seek to create employment and pathways to employment for those highly disadvantaged in the labour market. This chapter examines the effects of WISE on the wellbeing of immigrants and refugees experiencing multiple barriers to economic and social participation. Drawing on an evaluation of a programme that supports seven such enterprises in the Australian state of Victoria, the effects of involvement for individual participants and their communities are examined. The study finds that this social enterprise model affords unique local opportunities for economic and social participation for groups experiencing significant barriers to meaningful employment. These opportunities have a positive impact on individual and community-level wellbeing. However, the financial costs of the model are high relative to other employment programmes, which is consistent with international findings on intermediate labour market programmes. The productivity costs of WISE are also disproportionately high compared to private sector competitors in some industries. This raises considerable dilemmas for social enterprise operators seeking to produce social value and achieve business sustainability while bearing high productivity costs to fulfil their mission. Further, the evaluation illuminates an ongoing need to address the systemic and structural drivers of health and labour market inequalities that characterize socio-economic participation for immigrants and refugees.
Abstract:
Image representations derived from simplified models of the primary visual cortex (V1), such as HOG and SIFT, elicit good performance in a myriad of visual classification tasks including object recognition/detection, pedestrian detection and facial expression classification. A central question in the vision, learning and neuroscience communities regards why these architectures perform so well. In this paper, we offer a unique perspective to this question by subsuming the role of V1-inspired features directly within a linear support vector machine (SVM). We demonstrate that a specific class of such features in conjunction with a linear SVM can be reinterpreted as inducing a weighted margin on the Kronecker basis expansion of an image. This new viewpoint on the role of V1-inspired features allows us to answer fundamental questions on the uniqueness and redundancies of these features, and offer substantial improvements in terms of computational and storage efficiency.
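The observation that V1-inspired features can be subsumed within a linear SVM rests on the fact that a linear filter bank followed by a linear classifier is itself linear in the raw pixels. A toy numpy sketch of that equivalence (all names and dimensions are hypothetical; this illustrates only the underlying linearity, not the paper's Kronecker-basis analysis):

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 16, 8                       # d pixels, k filters (toy sizes)
F = rng.standard_normal((k, d))    # each row: one linear "V1-inspired" filter
w = rng.standard_normal(k)         # linear SVM weights learned on filter responses
x = rng.standard_normal(d)         # a raw image, flattened

score_on_features = w @ (F @ x)    # SVM applied to filter-bank responses
w_equiv = F.T @ w                  # equivalent weight vector acting on raw pixels
score_on_pixels = w_equiv @ x

print(np.isclose(score_on_features, score_on_pixels))  # True
```

This collapse is what lets the filter bank's effect be analysed directly as a reweighting inside the classifier, rather than as a separate feature-extraction stage.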
Abstract:
Assessing and prioritising cost-effective strategies to mitigate the impacts of traffic incidents and accidents on non-recurrent congestion on major roads represents a significant challenge for road network managers. This research examines the influence of numerous factors associated with incidents of various types on their duration. It presents a comprehensive traffic incident data mining and analysis by developing an incident duration model based on twelve months of incident data obtained from the Australian freeway network. Parametric accelerated failure time (AFT) survival models of incident duration were developed, including log-logistic, lognormal, and Weibull forms, considering both fixed and random parameters, as well as a Weibull model with gamma heterogeneity. The Weibull AFT models with random parameters were appropriate for modelling incident durations arising from crashes and hazards, while a Weibull model with gamma heterogeneity was most suitable for modelling the incident duration of stationary vehicles. Significant variables affecting incident duration include incident characteristics (severity, type, towing requirements, etc.) as well as the location, time of day, and traffic characteristics of the incident. Moreover, the findings reveal no significant effects of infrastructure and weather on incident duration. A significant and unique contribution of this paper is showing that the durations of each type of incident are distinctly different and respond to different factors. The results of this study are useful for traffic incident management agencies seeking to implement strategies that reduce incident duration, leading to reduced congestion, secondary incidents, and the associated human and economic losses.
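In an accelerated failure time model, covariates act multiplicatively on the time scale rather than on the hazard. A small illustrative sketch of a Weibull AFT survival function (the shape, scale, and coefficient values are invented for illustration, not estimated from the study's data):

```python
import math

def weibull_aft_survival(t, shape, scale0, beta, x):
    """Probability that an incident is still ongoing at time t under a
    Weibull AFT model: covariates rescale time via scale = scale0 * exp(beta . x)."""
    scale = scale0 * math.exp(sum(b * xi for b, xi in zip(beta, x)))
    return math.exp(-((t / scale) ** shape))

# Hypothetical binary covariate: towing required (positive coefficient
# lengthens duration). Survival at t = 30 minutes:
base = weibull_aft_survival(30.0, shape=1.5, scale0=40.0, beta=[0.6], x=[0])
towed = weibull_aft_survival(30.0, shape=1.5, scale0=40.0, beta=[0.6], x=[1])
print(base, towed)  # the towed incident is more likely to still be ongoing
```

With a positive coefficient on towing, an incident requiring a tow has a higher probability of still being in progress at any given time, i.e. a longer expected duration, which is the sense in which AFT coefficients "accelerate" or "decelerate" time.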
Abstract:
Background Non-fatal health outcomes from diseases and injuries are a crucial consideration in the promotion and monitoring of individual and population health. The Global Burden of Disease (GBD) studies done in 1990 and 2000 have been the only studies to quantify non-fatal health outcomes across an exhaustive set of disorders at the global and regional level. Neither effort quantified uncertainty in prevalence or years lived with disability (YLDs). Methods Of the 291 diseases and injuries in the GBD cause list, 289 cause disability. For 1160 sequelae of the 289 diseases and injuries, we undertook a systematic analysis of prevalence, incidence, remission, duration, and excess mortality. Sources included published studies, case notification, population-based cancer registries, other disease registries, antenatal clinic serosurveillance, hospital discharge data, ambulatory care data, household surveys, other surveys, and cohort studies. For most sequelae, we used a Bayesian meta-regression method, DisMod-MR, designed to address key limitations in descriptive epidemiological data, including missing data, inconsistency, and large methodological variation between data sources. For some disorders, we used natural history models, geospatial models, back-calculation models (models calculating incidence from population mortality rates and case fatality), or registration completeness models (models adjusting for incomplete registration with health-system access and other covariates). Disability weights for 220 unique health states were used to capture the severity of health loss. YLDs by cause at age, sex, country, and year levels were adjusted for comorbidity with simulation methods. We included uncertainty estimates at all stages of the analysis. Findings Global prevalence for all ages combined in 2010 across the 1160 sequelae ranged from fewer than one case per 1 million people to 350 000 cases per 1 million people. 
Prevalence and severity of health loss were weakly correlated (correlation coefficient −0·37). In 2010, there were 777 million YLDs from all causes, up from 583 million in 1990. The main contributors to global YLDs were mental and behavioural disorders, musculoskeletal disorders, and diabetes or endocrine diseases. The leading specific causes of YLDs were much the same in 2010 as they were in 1990: low back pain, major depressive disorder, iron-deficiency anaemia, neck pain, chronic obstructive pulmonary disease, anxiety disorders, migraine, diabetes, and falls. Age-specific prevalence of YLDs increased with age in all regions and has decreased slightly from 1990 to 2010. Regional patterns of the leading causes of YLDs were more similar compared with years of life lost due to premature mortality. Neglected tropical diseases, HIV/AIDS, tuberculosis, malaria, and anaemia were important causes of YLDs in sub-Saharan Africa. Interpretation Rates of YLDs per 100 000 people have remained largely constant over time but rise steadily with age. Population growth and ageing have increased YLD numbers and crude rates over the past two decades. Prevalences of the most common causes of YLDs, such as mental and behavioural disorders and musculoskeletal disorders, have not decreased. Health systems will need to address the needs of the rising numbers of individuals with a range of disorders that largely cause disability but not mortality. Quantification of the burden of non-fatal health outcomes will be crucial to understand how well health systems are responding to these challenges. Effective and affordable strategies to deal with this rising burden are an urgent priority for health systems in most parts of the world. Funding Bill & Melinda Gates Foundation.
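For orientation, the prevalence-based YLD computation used in GBD 2010 multiplies prevalent cases by a disability weight between 0 (full health) and 1 (loss equivalent to death). A trivial sketch with invented numbers (before the comorbidity adjustment described above):

```python
def ylds(prevalent_cases, disability_weight):
    """Prevalence-based years lived with disability for one sequela:
    prevalent cases weighted by severity of health loss."""
    return prevalent_cases * disability_weight

# Hypothetical sequela: one million prevalent cases, disability weight 0.07.
print(ylds(1_000_000, 0.07))  # 70000.0 YLDs
```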
Abstract:
In this paper, we propose an approach which attempts to solve the problem of surveillance event detection, assuming that we know the definition of the events. To facilitate the discussion, we first define two concepts: the event of interest refers to the event that the user requests the system to detect, and the background activities are any other events in the video corpus. This is an unsolved problem due to many factors, as listed below: 1) Occlusions and clustering: The surveillance scenes of significant interest, at locations such as airports, railway stations, and shopping centers, are often crowded, so occlusions and clustering of people are frequently encountered. This significantly affects the feature extraction step; for instance, trajectories generated by object tracking algorithms are usually not robust in such situations. 2) The requirement for real-time detection: The system should process the video fast enough in both the feature extraction and the detection steps to facilitate real-time operation. 3) Massive size of the training data set: Suppose an event lasts for 1 minute in a video with a frame rate of 25 fps; the number of frames for this event is 60 × 25 = 1500. If we want a training data set with many positive instances of the event, the video is likely to be very large in size (i.e. hundreds of thousands of frames or more). How to handle such a large data set is a problem frequently encountered in this application. 4) Difficulty in separating the event of interest from background activities: The events of interest often co-exist with a set of background activities. Temporal groundtruth is typically very ambiguous, as it does not distinguish the event of interest from the wide range of co-existing background activities. However, it is not practical to annotate the locations of the events in large amounts of video data.
This problem becomes more serious in the detection of multi-agent interactions, since the locations of these events often cannot be constrained to a bounding box. 5) Challenges in determining the temporal boundaries of the events: An event can occur at any arbitrary time with an arbitrary duration. The temporal segmentation of events is difficult and ambiguous, and is also affected by other factors such as occlusions.
Abstract:
In this paper we propose a framework for both gradient descent image and object alignment in the Fourier domain. Our method centers upon the classical Lucas & Kanade (LK) algorithm where we represent the source and template/model in the complex 2D Fourier domain rather than in the spatial 2D domain. We refer to our approach as the Fourier LK (FLK) algorithm. The FLK formulation is advantageous when one pre-processes the source image and template/model with a bank of filters (e.g. oriented edges, Gabor, etc.) as: (i) it can handle substantial illumination variations, (ii) the inefficient pre-processing filter bank step can be subsumed within the FLK algorithm as a sparse diagonal weighting matrix, (iii) unlike traditional LK the computational cost is invariant to the number of filters and as a result far more efficient, and (iv) this approach can be extended to the inverse compositional form of the LK algorithm where nearly all steps (including Fourier transform and filter bank pre-processing) can be pre-computed leading to an extremely efficient and robust approach to gradient descent image matching. Further, these computational savings translate to non-rigid object alignment tasks that are considered extensions of the LK algorithm such as those found in Active Appearance Models (AAMs).
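The computational advantages claimed for FLK follow from two classical facts: Parseval's theorem preserves sums of squared errors in the Fourier domain, and the convolution theorem turns filtering into element-wise (diagonal) weighting there. A 1-D numpy sketch of both facts (a toy illustration under numpy's FFT conventions, not the paper's algorithm):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64
image = rng.standard_normal(n)
template = rng.standard_normal(n)

# Parseval: spatial-domain SSE equals Fourier-domain SSE
# (up to the 1/n factor in numpy's unnormalized FFT convention).
err_spatial = np.sum((image - template) ** 2)
err_fourier = np.sum(np.abs(np.fft.fft(image) - np.fft.fft(template)) ** 2) / n
print(np.isclose(err_spatial, err_fourier))  # True

# A pre-processing filter applied by (circular) convolution becomes a
# diagonal weighting |H|^2 on the Fourier-domain error.
h = rng.standard_normal(n)          # a hypothetical filter from the bank
H = np.fft.fft(h)
D = np.fft.fft(image) - np.fft.fft(template)
weighted = np.sum(np.abs(H) ** 2 * np.abs(D) ** 2) / n
filtered_spatial = np.sum(np.real(np.fft.ifft(H * D)) ** 2)
print(np.isclose(weighted, filtered_spatial))  # True
```

Because the weighting is diagonal, applying an entire bank of filters reduces to summing the |H|^2 terms once, which is why the cost of FLK can be made invariant to the number of filters.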
Abstract:
Driving on an approach to a signalized intersection while distracted is particularly dangerous, as potential vehicular conflicts and resulting angle collisions tend to be severe. Given the prevalence and importance of this particular scenario, the decisions and actions of distracted drivers during the onset of yellow lights are the focus of this study. Driving simulator data were obtained from a sample of 58 drivers under baseline and handheld mobile phone conditions at the University of Iowa - National Advanced Driving Simulator. Explanatory variables included age, gender, cell phone use, distance to stop-line, and speed. Although there is extensive research on drivers’ responses to yellow traffic signals, the examination has been conducted from a traditional regression-based approach, which does not necessarily reveal the underlying relations and patterns among the sampled data. In this paper, we exploit the benefits of both classical statistical inference and data mining techniques to identify a priori the relationships among main effects, non-linearities, and interaction effects. Results suggest that novice (16-17 years) and young (18-25 years) drivers have heightened yellow light running risk while distracted by a cell phone conversation. Driver experience, captured by age, has a multiplicative effect with distraction, making the combined effect of being inexperienced and distracted particularly risky. Overall, distracted drivers across most tested groups tend to reduce the propensity for yellow light running as the distance to the stop line increases, exhibiting risk compensation in a critical driving situation.
Abstract:
This paper begins by providing an overview of bike share programs, followed by a critical examination of the growing body of literature on these programs. This synthesis of previous works, both peer-reviewed and grey, includes an identification of the current gaps in knowledge related to the impacts of bike sharing programs. This synthesis represents a critically needed evaluation of the current state of global bike share research, in order to better understand, and maximize the effectiveness of current and future programs. Several consistent themes have emerged within the growing body of research on bike share programs. Firstly, the importance bike share members place on convenience and value for money appears paramount in their motivation to sign up and use these programs. Secondly, and somewhat counter intuitively, scheme members are more likely to own and use private bicycles than non-members. Thirdly, users demonstrate a greater reluctance to wear helmets than private bicycle riders and helmets have acted as a deterrent in jurisdictions in which helmets are mandatory. Finally, and perhaps most importantly from a sustainable transport perspective, the majority of scheme users are substituting from sustainable modes of transport rather than the car.
Abstract:
Background: When experiencing sleep problems for the first time, consumers may often approach community pharmacists for advice as they are easily accessible health care professionals in the community. In Australian community pharmacies there are no specific tools available for use by pharmacists to assist with the assessment and handling of consumers with sleep enquiries. Objective: To assess the feasibility of improving the detection of sleep disorders within the community through the pilot of a newly developed Community Pharmacy Sleep Assessment Tool (COP-SAT). Method: The COP-SAT was designed to incorporate elements from a number of existing, standardized, and validated clinical screening measures. The COP-SAT was trialed in four Australian community pharmacies over a 4-week period. Key findings: A total of 241 community pharmacy consumers were assessed using the COP-SAT. A total of 74 (30.7%) were assessed as being at risk of insomnia, 26 (10.7%) were at risk of daytime sleepiness, 19 (7.9%) were at risk of obstructive sleep apnea, and 121 (50.2%) were regular snorers. A total of 116 (48.1%) participants indicated that they consume caffeine before bedtime, of which 55 (47%) had associated symptoms of sleep onset insomnia. Moreover, 85 (35%) consumed alcohol before bedtime, of which 50 (58%) experienced fragmented sleep, 50 (58%) were regular snorers, and nine (10.6%) had apnea symptoms. The COP-SAT was feasible in the community pharmacy setting. The prevalence of sleep disorders in the sampled population was high, but generally consistent with previous studies on the general population. Conclusion: A large proportion of participants reported sleep disorder symptoms, and a link was found between the consumption of alcohol and caffeine substances at bedtime and associated symptoms. 
While larger studies are needed to assess the clinical properties of the tool, the results of this feasibility study have demonstrated that the COP-SAT may be a practical tool for the identification of patients at risk of developing sleep disorders in the community.
Abstract:
An engaging narrative is maintained throughout this edited collection of articles that address the issue of militarism in international relations. The book seamlessly integrates historical and contemporary perspectives on militarism with theory and relevant international case studies, resulting in a very informative read. The work comprises three parts. Part 1 deals with the theorisation of militarism and includes chapters by Anna Stavrianakis and Jan Selby, Martin Shaw, Simon Dalby, and Nicola Short. It covers a range of topics relating to historical and contemporary theories of militarism, geopolitical threat construction, political economy, and the US military’s ‘cultural turn’.
Abstract:
The question has been asked "Can we shape an imaginative teacher identity?" (Garrod 2005). This research explores how participating in an overseas school experience can contribute to developing TESOL teachers' professional identities. The particular context for this research is the experiences of a group of Hong Kong pre-service TESOL teachers attending a short language immersion program at a university in Australia. As part of this program, the teachers are attached to a local school, where they observe and interact with the school community. This research tracks the teachers' perceptions of how this experience has contributed to their understandings of what it means to be a TESOL teacher and to their own developing sense of professional identity. It is suggested that engaging in critical reflection on this intercultural experience can develop teacher knowledge, practice and sense of self. Recent research (Swain, in press) posits that verbal protocols, such as 'stimulated recall', are part of the learning process rather than mere data collection techniques. How can this potential of 'stimulated recall' be realized as part of the learning processes in TESOL teacher education?