710 results for Traditional school science
Abstract:
Motor vehicle crashes are a leading cause of death among young people. Fourteen percent of adolescents aged 13-14 report passenger-related injuries within a three-month period. Intervention programs typically focus on young drivers and overlook passengers as potential protective influences. Graduated Driver Licensing restricts passenger numbers, and this study focuses on a complementary school-based intervention to increase passengers’ personal- and peer-protective behavior. The aim of this research was to assess the impact of the curriculum-based injury prevention program, Skills for Preventing Injury in Youth (SPIY), on passenger-related risk-taking and injuries, and on intentions to intervene in friends’ risky road behavior. SPIY was implemented in Grade 8 Health classes and evaluated using survey and focus group data from 843 students across 10 Australian secondary schools. Intervention students reported less passenger-related risk-taking six months after the program, and their intention to protect friends from underage driving also increased. The results of this study show that a comprehensive, school-based program targeting individual and social change can increase adolescent passenger safety.
Abstract:
The globalized nature of modern society has generated a number of pressures that impact internationally on countries’ policies and practices of science education. Among these pressures are key issues of health and environment confronting global science, global economic control through multinational capitalism, comparative and competitive international testing of student science achievement, and the desire for a more humane and secure international society. These are not all one-way pressures, and there is evidence both of more conformity in the intentions and practices of science education and of a greater appreciation of how cultural differences and the needs of students as future citizens can be met. Hence, while a case for the economic and competitive subservience of science education can be made, the evidence for such narrowing is countered by new initiatives that seek to broaden its vision and practices. The research community of science education has certainly widened internationally, and this generates many healthy exchanges, although cultural styles of education other than Western ones are still insufficiently recognized. The dominance of the English language within these research exchanges is, however, causing as many problems as it solves. Science education, like education as a whole, is a strongly cultural phenomenon, and this provides a healthy and robust buffer against the more negative effects of globalization.
Abstract:
In a commercial environment, it is advantageous to know how long it takes customers to move between different regions, how long they spend in each region, and where they are likely to go as they move from one location to another. Presently, these measures can only be determined manually, or through the use of hardware tags (e.g., RFID). Soft biometrics are characteristics that can be used to describe, but not uniquely identify, an individual. They include traits such as height, weight, gender, hair, skin and clothing colour. Unlike traditional biometrics, soft biometrics can be acquired by surveillance cameras at range without any user cooperation. While these traits cannot provide robust authentication, they can be used to provide identification at long range, and to aid object tracking and detection in disjoint camera networks. In this chapter we propose using colour, height and luggage soft biometrics to determine operational statistics relating to how people move through a space. A novel average soft biometric is used to locate people who look distinct, and these people are then detected at various locations within a disjoint camera network to gradually obtain operational statistics.
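The "average soft biometric" idea, flagging people whose traits deviate most from the population mean, can be sketched in a few lines. The trait vectors, names and threshold below are illustrative assumptions, not the chapter's actual feature set or method:

```python
# Sketch: flag "distinct" people as those whose soft-biometric trait
# vector lies farthest from the population average. The trait vectors
# here are made up (e.g. [height_m, clothing_hue, luggage_flag]); the
# chapter's real traits are colour, height and luggage cues extracted
# from surveillance footage.

def mean_vector(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def most_distinct(people, top_k=1):
    """Return the top_k (person_id, distance) pairs farthest from the average."""
    avg = mean_vector(list(people.values()))
    ranked = sorted(((pid, distance(v, avg)) for pid, v in people.items()),
                    key=lambda t: t[1], reverse=True)
    return ranked[:top_k]

people = {
    "p1": [1.70, 0.30, 0.0],   # average-looking subject
    "p2": [1.72, 0.32, 0.0],
    "p3": [1.95, 0.90, 1.0],   # tall, bright clothing, carrying luggage
}
print(most_distinct(people))   # p3 is farthest from the average
```

Such a distinctiveness ranking is what makes re-detection across disjoint cameras tractable: only people who stand out from the average need to be matched between views.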
Abstract:
This paper investigates the effects of limited speech data in the context of speaker verification using a probabilistic linear discriminant analysis (PLDA) approach. Being able to reduce the length of required speech data is important to the development of automatic speaker verification systems in real-world applications. When sufficient speech is available, previous research has shown that heavy-tailed PLDA (HTPLDA) modeling of speakers in the i-vector space provides state-of-the-art performance; however, the robustness of HTPLDA to limited speech resources in development, enrolment and verification is an important issue that has not yet been investigated. In this paper, we analyze speaker verification performance with regard to the duration of utterances used for speaker evaluation (enrolment and verification) and for score normalization and PLDA modeling during development. Two different approaches to total-variability representation are analyzed within the PLDA approach to show improved performance in short-utterance mismatched evaluation conditions and in conditions for which insufficient speech resources are available for adequate system development. The results presented within this paper using the NIST 2008 Speaker Recognition Evaluation dataset suggest that the HTPLDA system can continue to achieve better performance than Gaussian PLDA (GPLDA) as evaluation utterance lengths are decreased. We also highlight the importance of matching durations for score normalization and PLDA modeling to the expected evaluation conditions. Finally, we found that a pooled total-variability approach to PLDA modeling can achieve better performance than the traditional concatenated total-variability approach for short utterances in mismatched evaluation conditions and in conditions for which insufficient speech resources are available for adequate system development.
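The score normalization whose duration-matching the paper stresses can be illustrated with a minimal z-norm sketch. The impostor cohort scores below are made up for illustration; real systems estimate them from a development cohort scored against the target model:

```python
# Sketch of z-norm score normalization: a raw verification score is
# shifted and scaled by impostor-score statistics gathered for the
# target model, so a single global decision threshold works across
# speakers. Scores below are illustrative, not NIST SRE data.

def znorm(raw_score, impostor_scores):
    n = len(impostor_scores)
    mean = sum(impostor_scores) / n
    var = sum((s - mean) ** 2 for s in impostor_scores) / n
    std = var ** 0.5
    return (raw_score - mean) / std

impostors = [0.10, 0.20, 0.15, 0.25, 0.30]  # cohort scores vs. this model
print(round(znorm(0.60, impostors), 3))     # → 5.657, well above the cohort
```

The paper's finding that normalization data should match the expected evaluation durations follows directly from this picture: if the cohort statistics are computed from long utterances but trials use short ones, the mean and standard deviation no longer describe the impostor score distribution actually encountered.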
Abstract:
This paper investigates the use of the dimensionality-reduction techniques weighted linear discriminant analysis (WLDA) and weighted median Fisher discriminant analysis (WMFD) before probabilistic linear discriminant analysis (PLDA) modeling, for the purpose of improving speaker verification performance in the presence of high inter-session variability. Recently it was shown that WLDA techniques can provide improvement over traditional linear discriminant analysis (LDA) for channel compensation in i-vector based speaker verification systems. We show in this paper that the speaker-discriminative information available in the distances between pairs of speakers clustered in the development i-vector space can also be exploited in heavy-tailed PLDA modeling by applying the weighted discriminant approaches prior to PLDA modeling. Based upon the results presented within this paper using the NIST 2008 Speaker Recognition Evaluation dataset, we believe that WLDA and WMFD projections before PLDA modeling can provide an improved approach when compared to uncompensated PLDA modeling for i-vector based speaker verification systems.
Abstract:
Traditional crash prediction models, such as generalized linear regression models, are incapable of taking into account the multilevel data structure that pervades crash data. Disregarding the possible within-group correlations can lead to models giving unreliable and biased estimates of unknowns. This study proposes a multilevel hierarchy, viz. (Geographic region level – Traffic site level – Traffic crash level – Driver-vehicle unit level – Vehicle-occupant level) × Time level, to establish a general form of multilevel data structure in traffic safety analysis. To properly model the potential cross-group heterogeneity due to the multilevel data structure, a framework of Bayesian hierarchical models that explicitly specify the multilevel structure and correctly yield parameter estimates is introduced and recommended. The proposed method is illustrated in an individual-severity analysis of intersection crashes using Singapore crash records. The study demonstrates the importance of accounting for within-group correlations, and the flexibility and effectiveness of the Bayesian hierarchical method in modeling the multilevel structure of traffic crash data.
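The within-group correlation that motivates the hierarchical approach can be seen in a toy simulation: observations at the same traffic site share a site-level random intercept, so they are correlated in a way a flat single-level model ignores. The site counts, variances and Gaussian effects below are illustrative assumptions, not the study's actual model:

```python
# Toy illustration of multilevel structure: crash-severity observations
# within the same traffic site share a site-level random intercept,
# inducing within-group correlation that a flat model would miss.
import random

random.seed(0)

def simulate(n_sites=50, n_obs=20, site_sd=1.0, noise_sd=1.0):
    data = []
    for site in range(n_sites):
        site_effect = random.gauss(0.0, site_sd)   # shared within the site
        for _ in range(n_obs):
            y = site_effect + random.gauss(0.0, noise_sd)
            data.append((site, y))
    return data

def intraclass_correlation(site_var, noise_var):
    # Share of total variance attributable to the site (group) level;
    # a flat model implicitly assumes this is zero.
    return site_var / (site_var + noise_var)

data = simulate()
print(len(data), round(intraclass_correlation(1.0, 1.0), 2))  # → 1000 0.5
```

With equal site-level and observation-level variances, half the total variance sits at the group level; treating the 1,000 observations as independent would badly understate the uncertainty of site-level estimates, which is the failure mode the Bayesian hierarchical framework is designed to avoid.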
Abstract:
The Commonwealth Department of Industry, Science and Resources is identifying best practice case study examples of supply chain management within the building and construction industry to illustrate the concepts, innovations and initiatives that are at work. The projects provide individual enterprises with examples of how to improve their performance, and the competitiveness of the industry as a whole.
Abstract:
Four morphologically cryptic species of the Bactrocera dorsalis fruit fly complex (B. dorsalis s.s., B. papayae, B. carambolae and B. philippinensis) are serious agricultural pests. As they are difficult to diagnose using traditional taxonomic techniques, we examined the potential for geometric morphometric analysis of wing size and shape to discriminate between them. Fifteen wing landmarks generated size and shape data for 245 specimens for subsequent comparisons among three geographically distinct samples of each species. Intraspecific wing size was significantly different within samples of B. carambolae and B. dorsalis s.s. but not within samples of B. papayae or B. philippinensis. Although B. papayae had the smallest wings (average centroid size=6.002 mm±0.061 SE) and B. dorsalis s.s. the largest (6.349 mm±0.066 SE), interspecific wing size comparisons were generally non-informative and incapable of discriminating species. Contrary to the wing size data, canonical variate analysis based on wing shape data discriminated all species with a relatively high degree of accuracy; individuals were correctly reassigned to their respective species on average 93.27% of the time. A single sample group of B. carambolae from locality 'TN Malaysia' was the only sample to be considerably different from its conspecific groups with regard to both wing size and wing shape. This sample was subsequently deemed to have been originally misidentified and likely represents an undescribed species. We demonstrate that geometric morphometric techniques analysing wing shape represent a promising approach for discriminating between morphologically cryptic taxa of the B. dorsalis species complex.
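Centroid size, the wing-size measure quoted above (e.g. 6.002 mm for B. papayae), is the standard geometric-morphometric size variable: the square root of the summed squared distances of the landmarks from their centroid. A minimal sketch with made-up landmark coordinates, not real wing landmarks:

```python
# Centroid size of a 2-D landmark configuration: the square root of the
# sum of squared distances from each landmark to the configuration's
# centroid. The four landmarks below are illustrative only.

def centroid_size(landmarks):
    n = len(landmarks)
    cx = sum(x for x, _ in landmarks) / n
    cy = sum(y for _, y in landmarks) / n
    return sum((x - cx) ** 2 + (y - cy) ** 2 for x, y in landmarks) ** 0.5

square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
print(round(centroid_size(square), 4))  # → 1.4142 (sqrt(2) for a unit square)
```

Because centroid size collapses the whole configuration to one scalar, it is exactly the kind of measure that can overlap heavily between species, which is why the shape analysis (canonical variates on the size-free landmark coordinates) succeeded where size comparisons did not.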
Abstract:
Navigational collisions are one of the major safety concerns for many seaports. Continuing growth of shipping traffic in number and size is likely to result in an increased number of traffic movements, which consequently could result in a higher risk of collisions in these restricted waters. This continually increasing safety concern warrants a comprehensive technique for modeling collision risk in port waters, particularly for modeling the probability of collision events and the associated consequences (i.e., injuries and fatalities). A number of techniques have been utilized for modeling the risk qualitatively, semi-quantitatively and quantitatively. These traditional techniques mostly rely on historical collision data, often in conjunction with expert judgments. However, they are hampered by several shortcomings, such as the randomness and rarity of collision occurrence, which yields an insufficient number of collision counts for sound statistical analysis; insufficiency in explaining collision causation; and a reactive approach to safety. A promising alternative approach that overcomes these shortcomings is the navigational traffic conflict technique (NTCT), which uses traffic conflicts as an alternative to collisions for modeling the probability of collision events quantitatively. This article explores the existing techniques for modeling collision risk in port waters. In particular, it identifies the advantages and limitations of the traditional techniques and highlights the potential of the NTCT in overcoming those limitations. In view of the principles of the NTCT, a structured method for managing collision risk is proposed. This risk management method allows safety analysts to diagnose safety deficiencies in a proactive manner, and consequently has great potential for managing collision risk in a fast, reliable and efficient manner.
Abstract:
Learning and then recognizing a route, whether travelled during the day or at night, in clear or inclement weather, and in summer or winter, is a challenging task for state-of-the-art algorithms in computer vision and robotics. In this paper, we present a new approach to visual navigation under changing conditions dubbed SeqSLAM. Instead of calculating the single most likely location given the current image, our approach calculates the best candidate matching location within every local navigation sequence. Localization is then achieved by recognizing coherent sequences of these “local best matches”. This approach removes the need for global matching performance by the vision front-end; instead, it must only pick the best match within any short sequence of images. The approach is applicable over environment changes that render traditional feature-based techniques ineffective. Using two car-mounted camera datasets we demonstrate the effectiveness of the algorithm and compare it to one of the most successful feature-based SLAM algorithms, FAB-MAP. The perceptual change in the datasets is extreme: repeated traverses through environments during the day and then in the middle of the night, at times separated by months or years and in opposite seasons, and in clear weather and extremely heavy rain. While the feature-based method fails, the sequence-based algorithm is able to match trajectory segments at 100% precision with recall rates of up to 60%.
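The core sequence-matching idea, scoring whole aligned sub-sequences of image differences rather than single frames, can be sketched as below. The descriptors are 1-D toy vectors and the alignment is the simplest fixed one-to-one case; the real SeqSLAM additionally applies local image normalisation and searches over trajectory speeds:

```python
# Toy sketch in the spirit of SeqSLAM: instead of picking the single
# reference image closest to the current query image, sum image
# differences over an aligned sequence of the last `seq_len` frames and
# pick the reference position whose local sequence matches best.

def image_diff(a, b):
    # Sum of absolute differences between two image descriptors.
    return sum(abs(x - y) for x, y in zip(a, b))

def best_sequence_match(reference, query, seq_len):
    """Return the reference index whose trailing sequence best matches
    the trailing `seq_len` frames of `query`."""
    q_tail = query[-seq_len:]
    best_idx, best_score = None, float("inf")
    for end in range(seq_len - 1, len(reference)):
        r_seq = reference[end - seq_len + 1 : end + 1]
        score = sum(image_diff(r, q) for r, q in zip(r_seq, q_tail))
        if score < best_score:
            best_idx, best_score = end, score
    return best_idx

# Reference traverse: simple 1-D "descriptors", one per frame.
reference = [[0.0], [1.0], [2.0], [3.0], [4.0], [5.0]]
# The same route revisited under changed conditions (a constant offset),
# currently at the frame corresponding to reference index 4.
query = [[2.4], [3.4], [4.4]]
print(best_sequence_match(reference, query, seq_len=3))  # → 4
```

Note that every single-frame difference here is non-zero, so a frame-by-frame nearest-neighbour match is fragile; the sequence score still dips sharply at the correct alignment, which is the intuition behind matching coherent sequences of local best matches.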
Abstract:
Key establishment is a crucial primitive for building secure channels in a multi-party setting. Without quantum mechanics, key establishment can only be done under the assumption that some computational problem is hard. Since digital communication can be easily eavesdropped and recorded, it is important to consider the secrecy of information in anticipation of future algorithmic and computational discoveries that could break the secrecy of past keys, violating the secrecy of the confidential channel. Quantum key distribution (QKD) can be used to generate secret keys that are secure against any future algorithmic or computational improvements. QKD protocols still require authentication of classical communication, although existing security proofs of QKD typically assume idealized authentication. It is generally considered folklore that QKD, when used with computationally secure authentication, is still secure against an unbounded adversary, provided the adversary did not break the authentication during the run of the protocol. We describe a security model for quantum key distribution extending classical authenticated key exchange (AKE) security models. Using our model, we characterize the long-term security of the BB84 QKD protocol with computationally secure authentication against an eventually unbounded adversary. By basing our model on traditional AKE models, we can more readily compare the relative merits of various forms of QKD and existing classical AKE protocols. This comparison illustrates in which types of adversarial environments different quantum and classical key agreement protocols can be secure.
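The BB84 protocol analysed in the paper derives its raw key by basis sifting: Alice and Bob keep only the positions where their independently chosen measurement bases agree. A minimal classical simulation of that sifting step, assuming an ideal noiseless channel with no eavesdropper (the authentication of the classical basis-comparison messages, the paper's actual focus, is not modelled):

```python
# Minimal simulation of BB84 basis sifting on an ideal, eavesdropper-free
# channel: each party picks a random basis ('+' rectilinear or 'x'
# diagonal) per qubit, and the sifted key keeps only positions where the
# bases coincide. On average about half the positions survive.
import random

random.seed(42)

def bb84_sift(n_bits):
    alice_bits = [random.randint(0, 1) for _ in range(n_bits)]
    alice_bases = [random.choice("+x") for _ in range(n_bits)]
    bob_bases = [random.choice("+x") for _ in range(n_bits)]
    # With matching bases (and no noise or Eve), Bob's measurement
    # deterministically yields Alice's bit, so we keep it.
    sifted = [bit for bit, a, b in zip(alice_bits, alice_bases, bob_bases)
              if a == b]
    return alice_bits, alice_bases, bob_bases, sifted

bits, a_bases, b_bases, key = bb84_sift(16)
print(len(key), key)  # roughly half the 16 positions survive sifting
```

The classical exchange in which the bases are compared is exactly the traffic that must be authenticated; the paper's contribution is characterizing what security survives when that authentication is only computationally secure.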
Abstract:
School reform is a matter of both redistributive social justice and recognitive social justice. Following Fraser (Justice interruptus: critical reflections on the “postsocialist” condition. Routledge, New York, 1997), we begin from a philosophical and political commitment to the more equitable redistribution of knowledge, credentials, competence, and capacity to children of low socioeconomic, cultural, and linguistic minority and Indigenous communities whose access, achievement, and participation historically have “lagged” behind system norms and benchmarks set by middle-class and dominant-culture communities. At the same time, we argue that the recognition of these students and their communities’ lifeworlds, knowledges, and experiences in the curriculum and in classroom teaching and learning is both a means and an end: a means toward improved achievement measured conventionally, and a goal for reform and alteration of mainstream curriculum knowledge and of what is made to count in the school as valued cultural knowledge and practice. The work that we report here was based on an ongoing 4-year project in which a team of university teacher educators/researchers has partnered with school leadership and staff to build relationships within the community. The purpose has been to study whether and how engagement with new digital arts and multimodal literacies could have effects on students’ “conventional” print literacy achievement and, secondly, to study whether and how the overall performance of a school could be generated through a focus on professional conversations and partnerships in curriculum and instruction – rather than the top-down implementation of a predetermined pedagogical scheme, package, or approach.
Abstract:
Design Science Research (DSR) has emerged as an important approach in Information Systems (IS) research. However, DSR is still in its genesis and has yet to achieve consensus on even the fundamentals, such as what methodology/approach to use for DSR. While there has been much effort to establish DSR methodologies, a complete, holistic and validated approach for the conduct of DSR to guide IS researchers (especially novice researchers) is yet to be established. Alturki et al. (2011) present a DSR ‘Roadmap’, making the claim that it is a complete and comprehensive guide for conducting DSR. This paper aims to further assess this Roadmap by positioning it against the ‘Idealized Model for Theory Development’ (IM4TD) (Fischer & Gregor 2011). The IM4TD highlights the role of discovery and justification and the forms of reasoning used to progress in theory development. Fischer and Gregor (2011) have applied the IM4TD’s hypothetico-deductive method to analyze DSR methodologies, and that method is adopted in this study to deductively validate the Alturki et al. (2011) Roadmap. The results suggest that the Roadmap adheres to the IM4TD, is reasonably complete, and overcomes most shortcomings identified in other DSR methodologies; the analysis also highlights valuable refinements that should be considered within the IM4TD.
Abstract:
Experts’ views and commentary have long been highly respected in every discipline. However, unlike traditional disciplines such as medicine, mathematics and engineering, Information Systems (IS) expertise is difficult to define. This paper attempts to understand the characteristics of IS experts through a comprehensive literature review of analogous disciplines, and then derives a formative research model with three main constructs. Further, this research validates the formative model to identify the characteristics of expertise using data gathered from 220 respondents using a contemporary Information System. Finally, this research demonstrates how individuals with different levels of expertise differ in their views in relation to system evaluations.