177 results for social information processing, aggression, victimization, violence
Abstract:
This abstract is a preliminary discussion of the importance of blending Indigenous cultural knowledges with mainstream knowledges of mathematics to support Indigenous young people. This importance is emphasised in the documents Preparing the Ground for Partnership (Priest, 2005), The Indigenous Education Strategic Directions 2008–2011 (Department of Education, Training and the Arts, 2007) and the National Goals for Indigenous Education (Department of Education, Employment and Work Relations, 2008). These documents highlight the contextualising of literacy and numeracy to students’ community and culture (see Priest, 2005). Here, community describes “a culture that is oriented primarily towards the needs of the group”. Martin Nakata (2007) describes contextualising to culture as being about that which already exists, that is, Torres Strait Islander community, cultural context and home languages (Nakata, 2007, p. 2). Continuing, Ezeife (2002) cites Hollins (1996) in stating that Indigenous people belong to “high-context culture groups” (p. 185). That is, “high-context cultures are characterized by a holistic (top-down) approach to information processing in which meaning is ‘extracted’ from the environment and the situation. Low-context cultures use a linear, sequential building block (bottom-up) approach to information processing in which meaning is constructed” (p. 185). In this regard, students who use holistic thought processing are more likely to be disadvantaged in mainstream mathematics classrooms, because Westernised mathematics is presented broken into parts, with limited connections made between concepts and with the students’ culture; it potentially conflicts with how they learn. If this is to change, the curriculum needs to be made more culture-sensitive and community-orientated so that students know and understand what they are learning and for what purposes.
Abstract:
This paper describes a novel framework for facial expression recognition from still images that selects, optimizes and fuses ‘salient’ Gabor feature layers to recognize six universal facial expressions using the K-nearest-neighbor classifier. Recognition comparisons with the all-layer approach on the JAFFE and Cohn-Kanade (CK) databases confirm that using ‘salient’ Gabor feature layers with optimized sizes can achieve better recognition performance and dramatically reduce computational time. Moreover, comparisons with state-of-the-art performance demonstrate the effectiveness of our approach.
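The layer selection and fusion above are the paper's contribution; as a minimal sketch of just the underlying building blocks (not the authors' method), the following hand-rolls a Gabor kernel bank, pools each orientation 'layer' into one feature, and classifies with a 1-nearest-neighbour vote on toy stripe images. All sizes and parameters here are illustrative assumptions.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def gabor_kernel(size=9, theta=0.0, lam=4.0, sigma=2.0):
    """Real part of a Gabor filter at orientation theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def gabor_features(img, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """One pooled feature per orientation 'layer': mean absolute response."""
    feats = []
    for t in thetas:
        k = gabor_kernel(theta=t)
        win = sliding_window_view(img, k.shape)        # all valid patches
        resp = np.einsum('ijkl,kl->ij', win, k)        # filter response map
        feats.append(np.abs(resp).mean())
    return np.array(feats)

def knn_predict(train_X, train_y, x, k=1):
    """Plain K-nearest-neighbour majority vote in feature space."""
    d = np.linalg.norm(train_X - x, axis=1)
    idx = np.argsort(d)[:k]
    vals, counts = np.unique(train_y[idx], return_counts=True)
    return vals[np.argmax(counts)]
```

On a real dataset such as JAFFE, `train_X` would hold per-layer Gabor features for labelled expression images and the salient-layer selection would prune `thetas` down to the discriminative orientations.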
Abstract:
In this paper, we use time series analysis to evaluate predictive scenarios using search engine transactional logs. Our goal is to develop models for the analysis of searchers’ behaviors over time and to investigate whether time series analysis is a valid method for predicting relationships between searcher actions. Time series analysis is a method often used to understand the underlying characteristics of temporal data in order to make forecasts. In this study, we used a Web search engine transactional log and time series analysis to investigate users’ actions. We conducted our analysis in two phases. In the initial phase, we employed a basic analysis and found that 10% of searchers clicked on sponsored links. However, from 22:00 to 24:00, searchers almost exclusively clicked on the organic links, with almost no clicks on sponsored links. In the second and more extensive phase, we used a one-step prediction time series analysis method along with a transfer function method. The period rarely affects navigational and transactional queries, while rates for transactional queries vary during different periods. Our results show that the average length of a searcher session is approximately 2.9 interactions and that this average is consistent across time periods. Most importantly, our findings show that searchers who submit the shortest queries (i.e., in number of terms) click on the highest-ranked results. We discuss implications, including predictive value, and future research.
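For readers unfamiliar with one-step prediction: a sketch of its simplest flavor, a least-squares AR(1) model that forecasts the next observation from the current one. This is an illustration of the general technique, not the paper's transfer-function model, and the coefficients below are estimated from synthetic data.

```python
import numpy as np

def fit_ar1(series):
    """Least-squares fit of x[t] ~ c + phi * x[t-1]."""
    x_prev, x_next = series[:-1], series[1:]
    A = np.column_stack([np.ones_like(x_prev), x_prev])
    (c, phi), *_ = np.linalg.lstsq(A, x_next, rcond=None)
    return c, phi

def one_step_forecast(series):
    """Predict the next value from the fitted AR(1) model."""
    c, phi = fit_ar1(series)
    return c + phi * series[-1]
```

In a log-analysis setting, `series` might be an hourly count of, say, sponsored-link clicks; a transfer-function model would additionally regress it on a second input series such as query volume.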
Abstract:
Cognitive-energetical theories of information processing were used to generate predictions regarding the relationship between workload and fatigue within and across consecutive days of work. Repeated measures were taken on board a naval vessel during a non-routine and a routine patrol. Data were analyzed using growth curve modeling. Fatigue demonstrated a non-monotonic relationship within days in both patrols – fatigue was high at midnight, decreased until noontime and then increased again. Fatigue increased across days towards the end of the non-routine patrol, but remained stable across days in the routine patrol. The relationship between workload and fatigue changed over consecutive days in the non-routine patrol. At the beginning of the patrol, low workload was associated with fatigue; at the end of the patrol, high workload was associated with fatigue. This relationship could not be tested in the routine patrol; however, that patrol demonstrated a non-monotonic relationship between workload and fatigue – both low and high workloads were associated with the highest fatigue. These results suggest that the optimal level of workload can change over time, and thus have implications for the management of fatigue.
Abstract:
This talk proceeds from the premise that IR should engage in a more substantial dialogue with cognitive science. After all, how users decide relevance, or how they choose terms to modify a query, are processes rooted in human cognition. Recently, there has been a growing literature applying quantum theory (QT) to model cognitive phenomena. This talk will survey recent research, in particular the modelling of interference effects in human decision making. One aspect of QT will be illustrated: how quantum entanglement can be used to model word associations in human memory. The implications of this will be briefly discussed in terms of a new approach for modelling concept combinations. Tentative links to human abductive reasoning will also be drawn. The basic theme behind this talk is that QT can potentially provide a new genre of information processing models (including search) more aligned with human cognition.
Abstract:
In this paper, we define and present a comprehensive classification of user intent for Web searching. The classification consists of three hierarchical levels of informational, navigational, and transactional intent. After deriving attributes of each, we then developed a software application that automatically classified queries using a Web search engine log of over a million and a half queries submitted by several hundred thousand users. Our findings show that more than 80% of Web queries are informational in nature, with about 10% each being navigational and transactional. In order to validate the accuracy of our algorithm, we manually coded 400 queries and compared the results from this manual classification to the results determined by the automated method. This comparison showed that the automatic classification has an accuracy of 74%. Of the remaining 25% of the queries, the user intent is vague or multi-faceted, pointing to the need for probabilistic classification. We discuss how search engines can use knowledge of user intent to provide more targeted and relevant results in Web searching.
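The paper's classifier derives attributes of each intent class and applies them automatically. A toy rule-based sketch of the same idea is below; the keyword lists are hypothetical stand-ins, not the authors' actual attributes, and a real system would also use probabilistic evidence for the vague, multi-faceted queries the abstract mentions.

```python
# Hypothetical attribute lists for illustration only.
NAVIGATIONAL_CUES = {"www", "http", ".com", "homepage", "login"}
TRANSACTIONAL_CUES = {"download", "buy", "price", "lyrics", "software"}

def classify_intent(query):
    """Assign one of the three hierarchical intent classes to a query.

    Informational is the default class, matching the finding that
    more than 80% of Web queries are informational in nature.
    """
    q = query.lower()
    if any(cue in q for cue in NAVIGATIONAL_CUES):
        return "navigational"
    if any(cue in q for cue in TRANSACTIONAL_CUES):
        return "transactional"
    return "informational"
```

Validation would mirror the paper's procedure: hand-label a sample of queries and compare against the automatic labels to estimate accuracy.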
Abstract:
Introduction
The purpose of this study was to develop, implement and evaluate the impact of an educational intervention, comprising an innovative model of clinical decision-making and an educational delivery strategy, for facilitating nursing students’ learning and development of competence in paediatric physical assessment practices.
Background of the study
Nursing students have an undergraduate education that aims to produce graduates of a generalist nature who demonstrate entry-level competence for providing nursing care in a variety of health settings. Consistent with population morbidity and health care roles, paediatric nursing concepts typically form a comparatively small part of undergraduate curricula, and students’ exposure to paediatric physical assessment concepts and principles is brief. However, the nursing shortage has changed traditional nursing employment patterns, and new graduates form the majority of the recruitment pool for paediatric nursing speciality staff. Paediatric nursing is a popular career choice for graduates, and anecdotal evidence suggests that nursing students who select a clinical placement in their final year intend to seek employment in paediatrics upon graduation. Although concepts of paediatric nursing are included within the undergraduate curriculum, students’ ability to develop the required habits of mind to practice in what is still regarded as a speciality area of practice is somewhat limited. One of the areas of practice where this particularly impacts is paediatric nursing physical assessment. Physical assessment is a fundamental component of nursing practice, and competence in this area is central to nursing students’ development of clinical capability for practice as a registered nurse. Timely recognition of physiologic deterioration of patients is a key outcome of nurses’ competent use of physical assessment strategies, regardless of the practice context.
In paediatric nursing contexts, children’s physical assessment practices must specifically accommodate the child’s different physiological composition, function and pattern of clinical deterioration (Hockenberry & Barrera, 2007). Thus, to effectively manage physical assessment of patients within the paediatric practice setting, nursing students need to integrate paediatric nursing theory into their practice. This requires significant information processing, and it is in this process that students are frequently challenged. The provision of rules or models can guide practice and assist novice-level nurses to develop their capabilities (Benner, 1984; Benner, Hooper-Kyriakidis & Stannard, 1999). Nursing practice models are cognitive tools that represent simplified patterns of expert analysis, employing concepts that suit the limited reasoning of the inexperienced, and can represent the ‘rules’ referred to by Benner (1984). Without a practice model of physical assessment, students are likely to be uncertain about how to proceed with data collection, the interpretation of paediatric clinical findings and the appraisal of findings. These circumstances can result in ad hoc and unreliable nursing physical assessment that forms a poor basis for nursing decisions. The educational intervention developed as part of this study sought to resolve this problem and support nursing students’ development of competence in paediatric physical assessment.
Methods
This study utilised the Context Input Process Product (CIPP) model (Stufflebeam, 2004) as the theoretical framework that underpinned the research design and evaluation methodology. Each of the four elements in the CIPP model was utilised to guide a discrete stage of this study. The Context element informed design of the clinical decision-making process, the Paediatric Nursing Physical Assessment (PNPA) model.
The Input element was utilised in appraising relevant literature, identifying an appropriate instructional methodology to facilitate learning and delivery of the educational intervention to undergraduate nursing students, and developing program content (the CD-ROM kit). Study One employed the Process element and used expert panel approaches to review and refine instructional methods, identifying potential barriers to obtaining an effective evaluation outcome. The Product element guided the design and implementation of Study Two, which was conducted in two phases. Phase One employed a quasi-experimental between-subjects methodology to evaluate the impact of the educational intervention on nursing students’ clinical performance and self-appraisal of practices in paediatric physical assessment. Phase Two employed thematic analysis to explore the experiences and perspectives of a subgroup of nursing students who used the PNPA CD-ROM kit as preparation for paediatric clinical placement.
Results
Results from the Process review in Study One indicated that the prototype CD-ROM kit containing the PNPA model met the predetermined benchmarks for face validity, and that the impact evaluation instrumentation had adequate content validity in comparison with predetermined benchmarks. In the first phase of Study Two, the educational intervention did not result in statistically significant differences in measures of student performance or self-appraisal of practice. However, in Phase Two, qualitative commentary from students, and from the expert panel who reviewed the prototype CD-ROM kit (Study One, Phase One), strongly endorsed the quality of the intervention and its potential for supporting learning. This raises questions regarding transfer of learning; it is likely that, within this study, several factors influenced students’ transfer of learning from the educational intervention to the clinical practice environment, where outcomes were measured.
Conclusion
In summary, the educational intervention employed in this study provides insights into the potential that e-learning approaches offer for delivering authentic learning experiences to undergraduate nursing students. Findings in this study raise important questions regarding possible pedagogical influences on learning outcomes, issues in the transfer of theory to practice, and factors that may have influenced findings within the context of this study. This study makes a unique contribution to nursing education, specifically with respect to progressing an understanding of the challenges faced in employing instructive methods to develop nursing students’ competence. The important contribution that transfer-of-learning processes make to students’ transition into the professional practice context, and to their development of competence within speciality practice, is also highlighted. This study contributes to a greater awareness of the complexity of translating theoretical learning at undergraduate level into clinical practice, particularly within speciality contexts.
Abstract:
Several brain imaging studies have assumed that response conflict is present in Stroop tasks. However, this has not been demonstrated directly. We examined the time course of stimulus and response conflict resolution in a numerical Stroop task by combining single-trial electromyography (EMG) and event-related brain potentials (ERP). EMG enabled direct tracking of response conflict, and the peak latency of the P300 ERP wave was used to index stimulus conflict. In correctly answered trials of the incongruent condition, EMG detected robust incorrect-response-hand activation which appeared consistently in single trials. In 50–80% of the trials, correct and incorrect response hand activation coincided temporally, while in 20–50% of the trials incorrect hand activation preceded correct hand activation. The EMG data provide robust direct evidence for response conflict. However, congruency effects also appeared in the peak latency of the P300 wave, which suggests that stimulus conflict also played a role in the Stroop paradigm. Findings are explained by the continuous flow model of information processing: partially processed task-irrelevant stimulus information can result in stimulus conflict and can prepare incorrect response activity. A robust congruency effect appeared in the amplitude of incongruent vs. congruent ERPs between 330 and 400 ms; this effect may be related to activity of the anterior cingulate cortex.
Abstract:
ERP systems generally implement controls to prevent certain common kinds of fraud. However, there is also an imperative need to detect more sophisticated patterns of fraudulent activity, as evidenced by the legal requirement for company audits and the common incidence of fraud. This paper describes the design and implementation of a framework for detecting patterns of fraudulent activity in ERP systems. We include descriptions of six fraud scenarios and the process of specifying and detecting the occurrence of those scenarios in ERP user log data using the prototype software we have developed. The test results for detecting these scenarios in log data have been verified and confirm the success of our approach, which generalizes across ERP systems.
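The paper specifies its six scenarios in its own framework; purely as an illustration of the idea, the sketch below scans a time-ordered user log for one classic scenario, a segregation-of-duties violation where the same user both creates a vendor and approves a payment to it. The log schema (`user`/`action`/`vendor` keys) is an assumption, not the paper's format.

```python
from collections import defaultdict

def detect_sod_violations(log):
    """Flag log entries where a user approves a payment to a vendor
    that the same user previously created (a segregation-of-duties
    violation). Entries are assumed to be in chronological order."""
    created = defaultdict(set)  # user -> vendors that user created
    hits = []
    for entry in log:
        user, action, vendor = entry["user"], entry["action"], entry["vendor"]
        if action == "create_vendor":
            created[user].add(vendor)
        elif action == "approve_payment" and vendor in created[user]:
            hits.append(entry)
    return hits
```

A fuller framework would express each scenario as a declarative pattern over log events rather than hard-coding the matching logic, so new scenarios can be added without changing the detector.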
Abstract:
This paper presents an overview of our demonstration of a low-bandwidth, wireless camera network where image compression is undertaken at each node. We briefly introduce the Fleck hardware platform we have developed as well as describe the image compression algorithm which runs on individual nodes. The demo will show real-time image data coming back to base as individual camera nodes are added to the network. Copyright 2007 ACM.
Abstract:
In this paper we describe the recent development of a low-bandwidth wireless camera sensor network. We propose a simple, yet effective, network architecture which allows multiple cameras to be connected to the network and synchronize their communication schedules. Image compression of greater than 90% is performed at each node running on a local DSP coprocessor, resulting in nodes using 1/8th the energy compared to streaming uncompressed images. We briefly introduce the Fleck wireless node and the DSP/camera sensor, and then outline the network architecture and compression algorithm. The system is able to stream color QVGA images over the network to a base station at up to 2 frames per second. © 2007 IEEE.
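For intuition on why >90% compression yields roughly an eighth of the energy rather than a tenth: on-node compression trades radio energy for DSP energy. The per-byte radio cost and per-frame DSP cost below are hypothetical round numbers, not measured Fleck figures; the sketch only shows the shape of the trade-off.

```python
QVGA_BYTES = 320 * 240 * 2      # raw 16-bit colour QVGA frame
RADIO_UJ_PER_BYTE = 2.0         # hypothetical radio transmit cost (uJ/byte)
DSP_UJ_PER_FRAME = 8000.0       # hypothetical on-node compression cost (uJ)

def frame_energy_uj(compress, ratio=0.1):
    """Energy to deliver one frame: radio cost of the bytes actually
    sent, plus the DSP cost when compressing on-node."""
    n_bytes = QVGA_BYTES * (ratio if compress else 1.0)
    energy = n_bytes * RADIO_UJ_PER_BYTE
    if compress:
        energy += DSP_UJ_PER_FRAME
    return energy

# Net saving is below the raw 10:1 bandwidth reduction because the
# DSP work is not free.
savings = frame_energy_uj(compress=False) / frame_energy_uj(compress=True)
```

With these illustrative constants the ratio comes out near 8x; the real balance depends on the radio, the DSP coprocessor and the achieved compression ratio.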
Abstract:
We describe the design, development and lessons learned from the first phase of a rainforest ecological sensor network at Springbrook, part of a World Heritage precinct in South East Queensland. This first phase is part of a major initiative to develop the capability to provide reliable, long-term monitoring of rainforest ecosystems. We focus in particular on our analysis of the energy and communication challenges which need to be solved to allow reliable, long-term deployments in these types of environments.
Abstract:
This paper investigates a mobile, wireless sensor/actuator network application for use in the cattle breeding industry. Our goal is to prevent fighting between bulls in on-farm breeding paddocks by autonomously applying appropriate stimuli when one bull approaches another bull. This is an important application because fighting between high-value animals such as bulls during breeding seasons causes significant financial loss to producers. Furthermore, there are significant challenges in this type of application because it requires dynamic animal state estimation, real-time actuation and efficient mobile wireless transmissions. We designed and implemented an animal state estimation algorithm based on a state-machine mechanism for each animal. Autonomous actuation is performed based on the estimated states of an animal relative to other animals. A simple, yet effective, wireless communication model has been proposed and implemented to achieve high delivery rates in mobile environments. We evaluated the performance of our design by both simulations and field experiments, which demonstrated the effectiveness of our autonomous animal control system.
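The abstract describes per-animal state estimation via a state-machine mechanism with actuation driven by estimated state. A stripped-down sketch of that pattern is below; the two states, distance thresholds and hysteresis band are hypothetical simplifications of whatever the deployed system actually used.

```python
# Hypothetical thresholds with hysteresis: trigger when another bull
# comes within APPROACH_M, reset only once it retreats past SAFE_M.
APPROACH_M, SAFE_M = 5.0, 8.0

class AnimalState:
    """Minimal two-state estimator for one animal."""

    def __init__(self):
        self.state = "normal"

    def update(self, dist_to_nearest):
        """Advance the state machine on a new proximity reading and
        return the stimulus to apply ('none' or 'warn')."""
        if self.state == "normal" and dist_to_nearest < APPROACH_M:
            self.state = "approaching"
        elif self.state == "approaching" and dist_to_nearest > SAFE_M:
            self.state = "normal"
        return "warn" if self.state == "approaching" else "none"
```

The hysteresis gap between the two thresholds prevents the actuator from chattering when an animal hovers near a single cutoff distance; the real system would feed GPS-derived pairwise distances into one such machine per animal and map states to the audio/vibration stimuli.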
Abstract:
Agriculture accounts for a significant portion of the GDP in most developed countries. However, managing farms, particularly large-scale extensive farming systems, is hindered by a lack of data and an increasing shortage of labour. We have deployed a large heterogeneous sensor network on a working farm to explore sensor network applications that can address some of these issues. Our network is solar powered and has been running for over 6 months. The current deployment consists of over 40 moisture sensors that provide soil moisture profiles at varying depths, weight sensors to compute the amount of food and water consumed by animals, electronic tag readers, up to 40 sensors that can be used to track animal movement (consisting of GPS, compass and accelerometers), and 20 sensor/actuators that can be used to apply different stimuli (audio, vibration and mild electric shock) to the animal. The static part of the network is designed for 24/7 operation and is linked to the Internet via a dedicated high-gain radio link, also solar powered. The initial goals of the deployment are to provide a testbed for sensor network research in programmability and data handling while also being a vital tool for scientists to study animal behavior. Our longer term aim is to create a management system that completely transforms the way farms are managed.
Abstract:
In cloud computing, resource allocation and scheduling of multiple composite web services is an important challenge. This is especially so in a hybrid cloud, where some free resources may be available from private clouds alongside fee-paying resources from public clouds. Meeting this challenge involves two classical computational problems. One is assigning resources to each of the tasks in the composite web service. The other is scheduling the allocated resources when each resource may be used by more than one task and may be needed at different points in time. In addition, Quality-of-Service issues, such as execution time and running costs, must be considered. Existing approaches to resource allocation and scheduling in public clouds and grid computing are not applicable to this new problem. This paper presents a random-key genetic algorithm that solves this new resource allocation and scheduling problem. Experimental results demonstrate the effectiveness and scalability of the algorithm.
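The defining trait of a random-key genetic algorithm is that a chromosome is a vector of floats in [0, 1) that a decoder maps to a feasible solution, so standard crossover and mutation always stay feasible. A toy decoder for the two coupled problems above (assignment plus schedule order) is sketched here; the encoding split and the makespan-only fitness are illustrative choices, not the paper's actual QoS model.

```python
import numpy as np

def decode(chrom, n_tasks, n_resources):
    """Random-key decoding: the first n_tasks genes pick a resource
    for each task; the last n_tasks genes are priorities whose sorted
    order gives the scheduling sequence (lower key = earlier)."""
    assign = (chrom[:n_tasks] * n_resources).astype(int).clip(max=n_resources - 1)
    order = np.argsort(chrom[n_tasks:])
    return assign, order

def makespan(chrom, durations, n_resources):
    """Toy fitness: finish time of the busiest resource."""
    assign, order = decode(chrom, len(durations), n_resources)
    free = np.zeros(n_resources)  # time at which each resource is next free
    for t in order:
        free[assign[t]] += durations[t]
    return free.max()
```

A full GA would evolve a population of such key vectors with uniform crossover and random-reset mutation, ranking individuals by a fitness that also prices execution cost on fee-paying public-cloud resources.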