205 results for statistical reports
Abstract:
In this paper, we used a nonconservative Lagrangian mechanics approach to formulate a new statistical algorithm for fluid registration of 3-D brain images. This algorithm is named SAFIRA, an acronym for statistically-assisted fluid image registration algorithm. A nonstatistical version of this algorithm was implemented, where the deformation was regularized by penalizing deviations from a zero rate of strain. In the statistical versions, the terms regularizing the deformation include the covariance of the deformation matrices (Σ) and of the vector fields (q). Here, we used a Lagrangian framework to reformulate this algorithm, showing that the regularizing terms essentially allow nonconservative work to occur during the flow. Given 3-D brain images from a group of subjects, vector fields and their corresponding deformation matrices are computed in a first round of registrations using the nonstatistical implementation. Covariance matrices for both the deformation matrices and the vector fields are then obtained and incorporated (separately or jointly) into the nonconservative terms, creating four versions of SAFIRA. We evaluated and compared our algorithms' performance on 92 3-D brain scans from healthy monozygotic and dizygotic twins; 2-D validations are also shown for corpus callosum shapes delineated at midline in the same subjects. After preliminary tests to demonstrate each method, we compared their detection power using tensor-based morphometry (TBM), a technique to analyze local volumetric differences in brain structure. We compared the accuracy of each algorithm variant using various statistical metrics derived from the images and deformation fields. All these tests were also run with a traditional fluid method, which has been widely used in TBM studies. The versions incorporating vector-based empirical statistics on brain variation were consistently more accurate than their counterparts when used for automated volumetric quantification in new brain images. This suggests the advantages of this approach for large-scale neuroimaging studies.
Abstract:
Contemporary models of spoken word production assume conceptual feature sharing determines the speed with which objects are named in categorically-related contexts. However, statistical models of concept representation have also identified a role for feature distinctiveness, i.e., features that identify a single concept and serve to distinguish it quickly from other similar concepts. In three experiments we investigated whether distinctive features might explain reports of counter-intuitive semantic facilitation effects in the picture word interference (PWI) paradigm. In Experiment 1, categorically-related distractors matched in terms of semantic similarity ratings (e.g., zebra and pony) and manipulated with respect to feature distinctiveness (e.g., a zebra has stripes, unlike other equine species) elicited interference effects of comparable magnitude. Experiments 2 and 3 investigated the role of feature distinctiveness with respect to reports of facilitated naming with part-whole distractor-target relations (e.g., a hump is a distinguishing part of a CAMEL, whereas knee is not, vs. an unrelated part such as plug). Related part distractors did not influence target picture naming latencies significantly when the part denoted by the related distractor was not visible in the target picture (whether distinctive or not; Experiment 2). When the part denoted by the related distractor was visible in the target picture, non-distinctive part distractors slowed target naming significantly at an SOA of -150 ms (Experiment 3). Thus, our results show that semantic interference does occur for part-whole distractor-target relations in PWI, but only when distractors denote features shared with the target and other category exemplars. We discuss the implications of these results for some recently developed, novel accounts of lexical access in spoken word production.
Abstract:
This paper reports preliminary findings of a survey of in-service teachers in WA and SA conducted in 2012. Participants completed an online survey open to all teachers in WA and SA; the survey ran for three months, from April to June 2012. One section of the survey asked teachers to report their perceptions of the impact that NAPLAN has had on the curriculum and pedagogy of their classroom and school. Two principal research questions were addressed in this preliminary analysis. First, what are teachers' perceptions of the effects of NAPLAN on curriculum and pedagogy? Second, are there any interaction effects between gender, socioeconomic status, location and school system on teachers' perceptions? Statistical analyses used one- and two-way MANOVAs to assess main effects and interaction effects on teachers' global perceptions. These were followed by a series of exploratory one- and two-way ANOVAs of specific survey items to suggest potential sources of differences among teachers from different socioeconomic regions, states and systems. Teachers report that they are either choosing or being instructed to teach to the test, that this results in less time being spent on other curriculum areas, and that these effects contribute negatively to student engagement. This largely agrees with a body of international research suggesting that high-stakes literacy and numeracy tests often result in unintended consequences such as a narrow curriculum focus (Au, 2007), a return to teacher-centred instruction (Barrett, 2009) and a decrease in motivation (Ryan & Weinstein, 2009). Preliminary results from early survey respondents suggest a relationship between participants' responses on the effect of NAPLAN on curriculum and pedagogy and the State in which the teacher taught, their perceptions of the socioeconomic status of the school, and the school system in which they were employed (State, Catholic, and Independent).
Abstract:
In an ever-changing and globalised world there is a need for higher education to adapt and evolve its models of learning and teaching. The old industrial model has lost traction, and new patterns of creative engagement are required. These new models potentially increase relevancy and better equip students for the future. Although creativity is recognised as an attribute that can contribute much to the development of these pedagogies, and creativity is valued by universities as a graduate capability, some educators understandably struggle to translate this vision into practice. This paper reports on selected survey findings from a mixed methods research project which aimed to shed light on how creativity can be designed for in higher education learning and teaching settings. A social constructivist epistemology underpinned the research and data was gathered using survey and case study methods. Descriptive statistical methods and informed grounded theory were employed for the analysis reported here. The findings confirm that creativity is valued for its contribution to the development of students’ academic work, employment opportunities and life in general; however, tensions arise between individual educator’s creative pedagogical goals and the provision of institutional support for implementation of those objectives. Designing for creativity becomes, paradoxically, a matter of navigating and limiting complexity and uncertainty, while simultaneously designing for those same states or qualities.
Abstract:
Pedestrian safety is a critical issue in Ethiopia. Reports show that 50 to 60% of traffic fatality victims in the country are pedestrians. The primary aim of this research was to examine the possible causes of, and contributing factors to, pedestrian crashes in Ethiopia, and to improve pedestrian safety by recommending possible countermeasures. The secondary aim was to develop appropriate pedestrian crash models for two-way two-lane rural roads and for roundabouts in the capital city of Ethiopia. The research applied quantitative and statistical methods throughout the investigation. The results support the idea that geometric and operational features have a significant influence on pedestrian safety and crashes. Accordingly, policies and strategies are needed to safeguard pedestrians in Ethiopia.
Abstract:
This chapter addresses opportunities for problem posing in developing young children’s statistical literacy, with a focus on student-directed investigations. Although the notion of problem posing has broadened in recent years, there nevertheless remains limited research on how problem posing can be integrated within the regular mathematics curriculum, especially in the areas of statistics and probability. The chapter first reviews briefly aspects of problem posing that have featured in the literature over the years. Consideration is next given to the importance of developing children’s statistical literacy, in which problem posing is an inherent feature. Some findings from a school playground investigation conducted in four fourth-grade classes illustrate the different ways in which children posed investigative questions, how they made predictions about their outcomes and compared these with their findings, and the ways in which they chose to represent their findings.
Abstract:
As statistical education becomes more firmly embedded in the school curriculum and its value across the curriculum is recognised, attention moves from knowing procedures, such as calculating a mean or drawing a graph, to understanding the purpose of a statistical investigation in decision making in many disciplines. As students learn to complete the stages of an investigation, the question of meaningful assessment of the process arises. This paper considers models for carrying out a statistical inquiry and, based on a four-phase model, creates a developmental sequence that can be used for the assessment of outcomes from each of the four phases as well as for the complete inquiry. The developmental sequence is based on the SOLO model, focussing on the "observed" outcomes during the inquiry process.
Abstract:
While historically linked with psychoanalysis, countertransference is recognised as an important component of the experience of therapists, regardless of the therapeutic modality. This study considers the implications of this for the training of psychologists. Fifty-five clinical psychology trainees from four university training programmes completed an anonymous questionnaire that collected written reports of countertransference experiences, ratings of confidence in managing these responses, and supervision in this regard. The reports were analysed using a process of thematic analysis. Several themes emerged including a desire to protect or rescue clients, feeling criticised or controlled by clients, feeling helpless, and feeling disengaged. Trainees varied in their reports of awareness of countertransference and the regularity of supervision in this regard. The majority reported a lack of confidence in managing their responses, and all reported interest in learning about countertransference. The implications for reflective practice in postgraduate psychology training are discussed.
Abstract:
This article examines a social media assignment used to teach and practice statistical literacy with over 400 students each semester in large-lecture traditional, fully online, and flipped sections of an introductory-level statistics course. Following the social media assignment, students completed a survey on how they approached the assignment. Drawing from the authors’ experiences with the project and the survey results, this article offers recommendations for developing social media assignments in large courses that focus on the interplay between the social media tool and the implications of assignment prompts.
Abstract:
The export of sediments from coastal catchments can have detrimental impacts on estuaries and near shore reef ecosystems such as the Great Barrier Reef. Catchment management approaches aimed at reducing sediment loads require monitoring to evaluate their effectiveness in reducing loads over time. However, load estimation is not a trivial task due to the complex behaviour of constituents in natural streams, the variability of water flows and often a limited amount of data. Regression is commonly used for load estimation and provides a fundamental tool for trend estimation by standardising the other time specific covariates such as flow. This study investigates whether load estimates and resultant power to detect trends can be enhanced by (i) modelling the error structure so that temporal correlation can be better quantified, (ii) making use of predictive variables, and (iii) by identifying an efficient and feasible sampling strategy that may be used to reduce sampling error. To achieve this, we propose a new regression model that includes an innovative compounding errors model structure and uses two additional predictive variables (average discounted flow and turbidity). By combining this modelling approach with a new, regularly optimised, sampling strategy, which adds uniformity to the event sampling strategy, the predictive power was increased to 90%. Using the enhanced regression model proposed here, it was possible to detect a trend of 20% over 20 years. This result is in stark contrast to previous conclusions presented in the literature.
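The notion of "power to detect a trend" above can be illustrated with a naive Monte Carlo sketch: simulate annual log-loads with a linear trend and AR(1)-correlated errors, and count how often an ordinary least-squares t-test on the slope rejects. All parameter values, and the use of a plain OLS test, are illustrative assumptions; the paper's compounding-errors model and covariate standardisation are considerably more elaborate.

```python
import numpy as np

def trend_power(n_years=20, total_decline=0.20, sigma=0.3, rho=0.5,
                reps=2000, seed=0):
    """Monte Carlo power to detect a linear trend in annual log-loads with
    AR(1)-correlated errors, using a naive OLS t-test on the slope
    (which ignores the autocorrelation)."""
    rng = np.random.default_rng(seed)
    t = np.arange(n_years, dtype=float)
    # Slope giving e.g. a 20% total decline over the monitoring period.
    slope_true = np.log(1.0 - total_decline) / (n_years - 1)
    hits = 0
    for _ in range(reps):
        # AR(1) errors started from the stationary distribution.
        e = np.empty(n_years)
        e[0] = rng.standard_normal() * sigma / np.sqrt(1.0 - rho**2)
        for i in range(1, n_years):
            e[i] = rho * e[i - 1] + sigma * rng.standard_normal()
        y = slope_true * t + e
        # OLS slope and its t statistic.
        X = np.column_stack([np.ones(n_years), t])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        s2 = resid @ resid / (n_years - 2)
        se = np.sqrt(s2 / (((t - t.mean()) ** 2).sum()))
        hits += abs(beta[1] / se) > 2.101  # two-sided t critical value, df=18
    return hits / reps
```

With noisy, positively correlated errors the naive test has little power over 20 years, which is consistent with the difficulty the abstract describes; reducing the error variance (as better predictors and sampling designs do) raises the power sharply.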
Abstract:
We consider the development of statistical models for predicting the constituent concentration of riverine pollutants, which is a key step in load estimation from frequent flow-rate data and less frequently collected concentration data. We consider how to capture the impacts of past flow patterns via the average discounted flow (ADF), which discounts the past flux based on the time elapsed: more recent fluxes are given more weight. However, the effectiveness of the ADF depends critically on the choice of the discount factor, which reflects the unknown environmental accumulation process of the concentration compounds. We propose to choose the discount factor by maximizing the adjusted R² (adjusted to account for the number of parameters in the model fit) or the Nash-Sutcliffe model efficiency coefficient. The resulting optimal discount factor can be interpreted as a measure of the constituent exhaustion rate during flood events. To evaluate the performance of the proposed regression estimators, we examine two different sampling scenarios by resampling fortnightly and opportunistically from two real daily datasets, which come from two United States Geological Survey (USGS) gaging stations located in the Des Plaines River and Illinois River basins. The generalized rating-curve approach produces biased estimates of the total sediment loads, by -30% to 83%, whereas the new approaches produce much lower biases, ranging from -24% to 35%. This substantial improvement in the estimates of the total load is due to the fact that the predictability of concentration is greatly improved by the additional predictors.
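The ADF construction and the discount-factor selection can be sketched in a few lines. The recursive exponential-discounting form, the grid of candidate factors, and the function names below are illustrative assumptions, not the authors' implementation; the selection criterion (maximizing adjusted R² of a concentration regression) follows the abstract.

```python
import numpy as np

def adf(flow, d):
    """Average discounted flow: an exponentially discounted average of past
    flows, so more recent fluxes get more weight; d in (0, 1) is the
    discount factor (assumed recursive form)."""
    out = np.empty(len(flow), dtype=float)
    acc = float(flow[0])
    for t, q in enumerate(flow):
        acc = d * acc + (1.0 - d) * q   # recursive exponential discounting
        out[t] = acc
    return out

def adjusted_r2(y, X):
    """Adjusted R² of an OLS fit of y on X (intercept added), penalising
    the number of fitted parameters."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    r2 = 1.0 - (resid @ resid) / (((y - y.mean()) ** 2).sum())
    n, p = X1.shape
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p)

def choose_discount(conc, flow, grid=np.linspace(0.05, 0.95, 19)):
    """Pick the discount factor maximizing adjusted R² of conc ~ flow + ADF."""
    scores = [adjusted_r2(conc, np.column_stack([flow, adf(flow, d)]))
              for d in grid]
    return grid[int(np.argmax(scores))]
```

On synthetic data where concentration truly depends on the ADF at some discount factor, the grid search recovers a factor close to the true one, which is the sense in which the fitted factor measures the exhaustion rate.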
Abstract:
Power calculation and sample size determination are critical in designing environmental monitoring programs. The traditional approach based on comparing the mean values may become statistically inappropriate and even invalid when substantial proportions of the response values are below the detection limits or censored because strong distributional assumptions have to be made on the censored observations when implementing the traditional procedures. In this paper, we propose a quantile methodology that is robust to outliers and can also handle data with a substantial proportion of below-detection-limit observations without the need of imputing the censored values. As a demonstration, we applied the methods to a nutrient monitoring project, which is a part of the Perth Long-Term Ocean Outlet Monitoring Program. In this example, the sample size required by our quantile methodology is, in fact, smaller than that by the traditional t-test, illustrating the merit of our method.
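One way to see why a quantile method needs no imputation of censored values: a test on the q-th quantile can be based purely on the count of observations above a threshold, and any observation censored below a detection limit smaller than the threshold still counts correctly as "not above". The sketch below (exact binomial test plus Monte Carlo power) is an illustration of this general idea, not the paper's procedure.

```python
import numpy as np
from math import comb

def quantile_exceedance_test(x, threshold, q, alpha=0.05):
    """One-sided test of H0: the q-th quantile <= threshold, based only on
    the count of observations above the threshold. Values censored below a
    detection limit < threshold never need to be imputed: they simply count
    as 'not above'."""
    n = len(x)
    k = int(np.sum(np.asarray(x) > threshold))
    # Under H0, P(X > threshold) <= 1 - q, so K is at most Binomial(n, 1 - q).
    p = 1.0 - q
    p_value = sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))
    return p_value < alpha

def power_by_simulation(n, sampler, threshold, q, alpha=0.05,
                        reps=2000, seed=0):
    """Monte Carlo power of the quantile test at sample size n; invert this
    over n to get a required sample size for a target power."""
    rng = np.random.default_rng(seed)
    hits = sum(quantile_exceedance_test(sampler(rng, n), threshold, q, alpha)
               for _ in range(reps))
    return hits / reps
```

Because the statistic depends on the data only through counts, it is also insensitive to outliers, matching the robustness claim in the abstract.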
Abstract:
In this paper, we tackle the problem of unsupervised domain adaptation for classification. In the unsupervised scenario, where no labeled samples from the target domain are provided, a popular approach consists in transforming the data such that the source and target distributions become similar. To compare the two distributions, existing approaches make use of the Maximum Mean Discrepancy (MMD). However, this does not exploit the fact that probability distributions lie on a Riemannian manifold. Here, we propose to make better use of the structure of this manifold and rely on the distance on the manifold to compare the source and target distributions. In this framework, we introduce a sample selection method and a subspace-based method for unsupervised domain adaptation, and show that both these manifold-based techniques outperform the corresponding approaches based on the MMD. Furthermore, we show that our subspace-based approach yields state-of-the-art results on a standard object recognition benchmark.
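The MMD baseline the abstract compares against is a standard kernel two-sample statistic that can be estimated directly from samples. A minimal NumPy sketch (RBF kernel, biased V-statistic estimator) is given below as an illustration; it is not the paper's code.

```python
import numpy as np

def rbf_kernel(A, B, gamma):
    """Gaussian RBF kernel matrix: k(a, b) = exp(-gamma * ||a - b||^2)."""
    sq = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * np.maximum(sq, 0.0))  # clamp tiny negatives

def mmd2(X, Y, gamma=1.0):
    """Biased estimate of the squared Maximum Mean Discrepancy between the
    empirical distributions of samples X and Y (rows are points)."""
    return (rbf_kernel(X, X, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean()
            - 2.0 * rbf_kernel(X, Y, gamma).mean())
```

MMD-based adaptation methods learn a transform of the source data that drives this quantity toward zero; the paper's point is that replacing it with a distance that respects the Riemannian geometry of distributions compares the domains more faithfully.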
Abstract:
Responding to mixed evidence on the decision-usefulness of annual report disclosures for derivative financial instruments to capital market participants, and concerns identified by practice, this paper examines usefulness in a direct study of user perceptions. Interviews with analysts from Australia’s four major banks reveal essential usefulness, limited by the disclosures’ failure to reflect companies’ actual use of derivatives throughout the period, and inability of users to understand companies’ off-balance sheet risk and risk management practices from information considered generic and boilerplate. The research complements and extends existing archival and survey research and provides new evidence suggesting low-cost ways for increasing usefulness. It supports the International Accounting Standards Board’s disclosure recommendations in its recent Discussion Paper: A Review of the Conceptual Framework for Financial Reporting, but, at the same time, highlights that for these proposed measures to be successful in relation to IFRS 7, they may need to address other issues. The research increases knowledge of the informational requirements of lenders, an important class of financial information user, and supports calls from practice for companies to improve their disclosure of material economic risks.
Sleep-related crash characteristics: Implications for applying a fatigue definition to crash reports
Abstract:
Sleep-related (SR) crashes are an endemic problem the world over. However, police officers report difficulties in identifying sleepiness as a crash contributing factor. One approach to improving the sensitivity of SR crash identification is to apply a proxy definition post hoc to crash reports. To identify the prominent characteristics of SR crashes and highlight the influence of proxy definitions, ten years of Queensland (Australia) police reports of crashes occurring in ≥100 km/h speed zones were analysed. In Queensland, two approaches are routinely taken to identifying SR crashes. First, attending police officers identify crash causal factors; one possible option is ‘fatigue/fell asleep’. Second, a proxy definition is applied to all crash reports; those meeting the definition are considered SR and added to the police-reported SR crashes. Of the 65,204 vehicle operators involved in crashes, 3449 were police-reported as SR. Analyses of these data found that male drivers aged 16–24 years within the first two years of unsupervised driving were most likely to have a SR crash. Collision with a stationary object was more likely in SR than in not-SR crashes. Using the proxy definition, 9739 (14.9%) crashes were classified as SR. The proxy definition removes the findings that SR crashes are more likely to involve males and to be of high severity; additionally, proxy-defined SR crashes are no less likely at intersections than not-SR crashes. When interpreting crash data it is important to understand the implications of SR identification, because strategies aimed at reducing the road toll are informed by such data; without the correct interpretation, funding could be misdirected. Improving sleepiness identification should therefore be a priority for both police and proxy reporting.