162 results for Information processing
Abstract:
This study explores the effects of use-simulated and peripheral placements in video games on attitudes toward the brand. Results indicate that placements do not lead to enhanced brand attitude, even when controlling for involvement and skill. This appears to be due to constraints on brand information processing in a game context.
Abstract:
Many researchers have investigated and modelled aspects of Web searching, and a number of studies have explored the relationships between individual differences and Web searching. However, few studies have examined the role of users' cognitive styles in determining Web searching behaviour, and current models of Web searching give limited consideration to cognitive style. Individuals differ in their information processing approaches and in the way they represent information, and this affects their performance. To build better models of Web searching, we need to understand more about users' cognitive styles, their Web search behaviour, and the relationship between the two. More rigorous research is also needed that uses more complex and meaningful measures of relevance, across a range of search tasks and populations of Internet users. This project explores the relationships between users' cognitive styles and their Web searching, and will develop a model depicting those relationships. The related literature, aims and objectives, and research design are discussed.
Abstract:
Business processes have emerged as a well-respected variable in the design of successful corporations. However, unlike other key managerial variables, such as products and services, customers and employees, or physical and digital assets, the conceptualization and management of business processes are in many respects in their infancy. In this book, Jan Recker investigates the notion of quality of business process modeling grammars. His evaluation is based on ontological, qualitative, and quantitative analyses applied to BPMN, a widely used business process modeling grammar. His results reveal the ontological shortcomings of BPMN, how these manifest themselves in actual process modeling practice, and how they influence the usage behavior of modeling practitioners. More generally, his book constitutes a landmark for empirical technology assessment, analyzing the way in which design flaws in technology influence usage behavior.
Abstract:
In cloud computing, resource allocation and scheduling of multiple composite web services is an important challenge. This is especially so in a hybrid cloud, where some resources may be available for free from private clouds while others are fee-paying resources from public clouds. Meeting this challenge involves two classical computational problems. One is assigning resources to each of the tasks in the composite web service. The other is scheduling the allocated resources when each resource may be used by more than one task and may be needed at different points in time. In addition, Quality-of-Service issues, such as execution time and running cost, must be considered. Existing approaches to resource allocation and scheduling in public clouds and grid computing are not applicable to this new problem. This paper presents a random-key genetic algorithm that solves this new resource allocation and scheduling problem. Experimental results demonstrate the effectiveness and scalability of the algorithm.
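The abstract names the technique but gives no implementation detail; below is a minimal sketch of the random-key encoding idea for illustration only. The task workloads, resource parameters and fitness weighting are all hypothetical, and the decoding covers allocation only, not the full scheduling step from the paper.

```python
import random

# Hypothetical problem instance: 6 tasks, 3 candidate resources per task,
# each resource with an assumed (cost per unit, speed) pair.
NUM_TASKS = 6
RESOURCES = [(1.0, 2.0), (2.5, 4.0), (4.0, 8.0)]
WORK = [10, 7, 12, 5, 9, 8]   # assumed task workloads

def decode(keys):
    """Map each random key in [0, 1) to a resource index for its task."""
    return [int(k * len(RESOURCES)) for k in keys]

def fitness(keys):
    """Assumed QoS objective: weighted sum of total cost and makespan."""
    alloc = decode(keys)
    cost = sum(RESOURCES[r][0] * WORK[t] for t, r in enumerate(alloc))
    makespan = max(WORK[t] / RESOURCES[r][1] for t, r in enumerate(alloc))
    return cost + 10.0 * makespan   # arbitrary trade-off weight

def evolve(pop_size=30, generations=100, elite=5, mut_rate=0.1):
    pop = [[random.random() for _ in range(NUM_TASKS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)                  # lower fitness is better
        nxt = pop[:elite]                      # elitism
        while len(nxt) < pop_size:
            a, b = random.sample(pop[:15], 2)  # parents from the better half
            child = [x if random.random() < 0.5 else y for x, y in zip(a, b)]
            child = [random.random() if random.random() < mut_rate else k
                     for k in child]           # per-gene mutation
            nxt.append(child)
        pop = nxt
    best = min(pop, key=fitness)
    return decode(best), fitness(best)

if __name__ == "__main__":
    print(evolve())
```

The appeal of the random-key representation is that crossover and mutation always yield valid chromosomes, since any vector of values in [0, 1) decodes to a feasible allocation.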
Abstract:
We consider the problem of object tracking in a wireless multimedia sensor network (we focus mainly on the camera component in this work). The vast majority of current object tracking techniques, whether centralised or distributed, assume unlimited energy, so they do not translate well to the constraints of low-power distributed systems. In this paper we develop and analyse a highly scalable, distributed strategy for object tracking in wireless camera networks with limited resources. In the proposed system, cameras transmit descriptions of objects to a subset of neighbours determined using a predictive forwarding strategy. The received descriptions are then matched against locally generated descriptions at the next camera on the object's path using a probability-maximisation process. We show, via simulation, that our predictive forwarding and probabilistic matching strategy can significantly reduce the number of object misses, ID switches and ID losses; it can also reduce the number of required transmissions by up to 67% compared with a simple broadcast scenario. We show that our system performs well under realistic assumptions about matching objects' appearance using colour.
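As a rough illustration of the colour-based probabilistic matching step, the following sketch matches received colour histograms against locally generated ones by maximising a Bhattacharyya similarity. The histogram format, threshold and toy data are assumptions; the paper's actual descriptors and forwarding logic are not specified in the abstract.

```python
import numpy as np

def bhattacharyya(h1, h2):
    """Similarity between two normalised colour histograms (in [0, 1])."""
    return float(np.sum(np.sqrt(h1 * h2)))

def match(received, local, threshold=0.7):
    """Assign each received description to the local detection that
    maximises appearance similarity; None below threshold means a miss."""
    matches = {}
    for rid, rhist in received.items():
        best_id, best_sim = None, threshold
        for lid, lhist in local.items():
            sim = bhattacharyya(rhist, lhist)
            if sim > best_sim:
                best_id, best_sim = lid, sim
        matches[rid] = best_id
    return matches

# Toy usage with random 16-bin hue histograms (illustrative data only).
rng = np.random.default_rng(0)
def rand_hist():
    h = rng.random(16)
    return h / h.sum()

received = {"obj_a": rand_hist()}
local = {"det_1": rand_hist(), "det_2": received["obj_a"].copy()}
print(match(received, local))   # expect obj_a -> det_2 (identical histogram)
```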
Abstract:
Decentralised sensor networks typically consist of multiple processing nodes supporting one or more sensors, interconnected via wireless communication. Practical applications of Decentralised Data Fusion have generally been restricted to Gaussian-based approaches such as the Kalman or Information Filter. This paper proposes the use of Parzen window estimates as an alternative representation for performing Decentralised Data Fusion. The common information between two nodes must be removed from any received estimates before local data fusion may occur; otherwise, estimates may become overconfident due to data incest. A closed-form approximation to the division of two estimates is described, enabling conservative assimilation of incoming information at a node in a decentralised data fusion network. A simple example of tracking a moving particle with Parzen density estimates demonstrates how this algorithm allows conservative assimilation of network information.
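For context, the division operation referred to above corresponds to the generic decentralised fusion identity, in which the information common to two nodes is divided out to avoid data incest (this is the standard channel-filter form, not a formula quoted from the paper):

```latex
% Generic channel-filter fusion rule: the common information
% p(x | Z_i \cap Z_j) must be divided out to avoid double counting.
p\!\left(x \mid Z_i \cup Z_j\right) \;\propto\;
\frac{p\!\left(x \mid Z_i\right)\, p\!\left(x \mid Z_j\right)}
     {p\!\left(x \mid Z_i \cap Z_j\right)}
```

With Parzen window estimates each term is a mixture of kernels, and the quotient of two mixtures has no exact closed form, which is why the paper requires a closed-form approximation to the division.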
Abstract:
A trend in the design and implementation of modern industrial automation systems is to integrate computing, communication and control into a unified framework at different levels of machine/factory operations and information processing. These distributed control systems are referred to as networked control systems (NCSs). They are composed of sensors, actuators and controllers interconnected over communication networks. As most communication networks are not designed for NCS applications, the communication requirements of NCSs may not be satisfied. For example, traditional control systems require data to be accurate, timely and lossless; however, because of random transmission delays and packet losses, the control performance of a control system may be badly degraded, and the control system may even be rendered unstable. The main challenge of NCS design is to both maintain and improve stable control performance of an NCS. To achieve this, communication and control methodologies have to be co-designed. In recent decades, Ethernet and 802.11 networks have been introduced into control networks, and have even replaced traditional fieldbus products in some real-time control applications, because of their high bandwidth and good interoperability. As Ethernet and 802.11 networks are not designed for distributed control applications, two aspects of NCS research need to be addressed to make these communication networks suitable for control systems in industrial environments. From the perspective of networking, communication protocols need to be designed to satisfy NCS communication requirements such as real-time communication and high-precision clock consistency. From the perspective of control, methods to compensate for network-induced delays and packet losses are important for NCS design. To make Ethernet and 802.11 networks suitable for distributed control applications, this thesis develops a high-precision relative clock synchronisation protocol and an analytical model for analysing the real-time performance of 802.11 networks, and designs a new predictive compensation method. Firstly, a hybrid NCS simulation environment based on the NS-2 simulator is designed and implemented. Secondly, a high-precision relative clock synchronisation protocol is designed and implemented. Thirdly, transmission delays in 802.11 networks for soft-real-time control applications are modelled using a Markov chain in which real-time Quality-of-Service parameters are analysed under a periodic traffic pattern; this model accurately captures the tradeoff between real-time performance and throughput. Furthermore, a cross-layer optimisation scheme featuring application-layer flow-rate adaptation is designed to achieve a tradeoff between real-time and throughput performance in a typical NCS scenario with a wireless local area network. Fourthly, as a co-design approach for both the network and the controller, a new predictive compensation method for variable delay and packet loss in NCSs is designed, in which simultaneous end-to-end delays and packet losses during packet transmissions from sensors to actuators are tackled. The effectiveness of the proposed predictive compensation approach is demonstrated using our hybrid NCS simulation environment.
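To make the Markov chain modelling step concrete, here is a toy sketch of how a stationary distribution yields real-time QoS figures such as mean delay and deadline-miss probability. The three states, transition probabilities, per-state delays and deadline are all invented for illustration and bear no relation to the thesis's actual 802.11 model.

```python
import numpy as np

# Hypothetical 3-state channel model (idle / contending / retransmitting);
# transition probabilities are illustrative only. Rows sum to 1.
P = np.array([
    [0.80, 0.15, 0.05],
    [0.50, 0.30, 0.20],
    [0.40, 0.30, 0.30],
])
DELAY_MS = np.array([1.0, 5.0, 20.0])   # assumed per-state delay cost

def stationary(P, iters=1000):
    """Power iteration for the stationary distribution pi = pi @ P."""
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(iters):
        pi = pi @ P
    return pi

pi = stationary(P)
mean_delay = float(pi @ DELAY_MS)
deadline = 10.0                          # assumed soft-real-time deadline, ms
miss_prob = float(pi[DELAY_MS > deadline].sum())
print(f"stationary: {pi}, mean delay: {mean_delay:.2f} ms, "
      f"P(delay > {deadline} ms) ~ {miss_prob:.3f}")
```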
Abstract:
The study investigated the effect on learning of four different instructional formats used to teach assembly procedures. Cognitive load and spatial information processing theories were used to generate the instructional material. The first group received a physical model to study, the second an isometric drawing, the third an isometric drawing plus a model, and the fourth an orthographic drawing. Forty secondary school students were presented with the four different instructional formats and subsequently tested on an assembly task. The findings suggest that the model format, which only required encoding of an already constructed three-dimensional representation, caused less extraneous cognitive load than the isometric and orthographic formats. No significant difference was found between the model and the isometric-plus-model formats on any measure, because 80% of the students in the isometric-plus-model condition chose to use the model only. The model format also did not differ significantly from the other groups in total time taken to complete the assembly, in the number of correctly assembled pieces, or in time spent studying the tasks. However, the model group had significantly more correctly completed models and required fewer extra looks than the other groups.
Abstract:
The need to develop effective and efficient training programs has been recognised by all sectors engaged in training. In response, attention has focused on developing good competency statements and performance indicators to measure outcomes. Very little has been done to understand how competency statements are translated into good performance. To conceptualise this translation process, a representational model based on an information processing paradigm is proposed and discussed. It is argued that learners' prior knowledge and the effectiveness of the instructional material are two variables that have a significant bearing on how effectively competency knowledge is translated into outcomes. To contextualise the model, examples from apprentice training are used.
Abstract:
The combination of alcohol and driving is a major health and economic burden to most communities in industrialised countries. The total cost of crashes for Australia in 1996 was estimated at approximately 15 billion dollars, and the cost of fatal crashes was about 3 billion dollars (BTE, 2000). According to the Bureau of Infrastructure, Transport and Regional Development and Local Government (2009; BITRDLG), the overall cost of road fatality crashes for 2006 was $3.87 billion, with a single fatal crash costing an estimated $2.67 million. A major contributing factor to crashes involving serious injury is alcohol intoxication while driving. It is a well-documented fact that consumption of liquor impairs judgement of speed and distance and increases involvement in higher-risk behaviours (Waller, Hansen, Stutts, & Popkin, 1986a; Waller et al., 1986b). Waller et al. (1986a; b) assert that liquor impairs psychomotor function and therefore renders the driver impaired in a crisis situation. This impairment includes vision (degraded), information processing (slowed), steering, and performing two tasks at once in congested traffic (Moskowitz & Burns, 1990). As blood alcohol concentration (BAC) levels increase, the risk of crashing and of fatality increases exponentially (Department of Transport and Main Roads, 2009; DTMR). According to Compton et al. (2002), as cited in the Department of Transport and Main Roads (2009), crash risk is five times higher at a BAC of 0.10 than at a BAC of 0.00. The injury patterns sustained also tend to be more severe when liquor is involved, especially injuries to the brain (Waller et al., 1986b). Single and Rohl (1997) reported that 30% of all fatal crashes in Australia where alcohol involvement was known were associated with a BAC above the legal limit of 0.05 g/100 ml. Alcohol-related crashes therefore contribute about a third of the total cost of fatal crashes (i.e. $1 billion annually), and crashes where alcohol is involved are more likely to result in death or serious injury (ARRB Transport Research, 1999). It is a major concern that a drug capable of such impairment as alcohol is the most available and popular drug in Australia (Australian Institute of Health and Welfare, 2007; AIHW). According to the AIHW (2007), 89.9% of the approximately 25,000 Australians over the age of 14 surveyed had consumed alcohol at some point in time, and 82.9% had consumed liquor in the previous year. This study found that 12.1% of individuals admitted to driving a motor vehicle whilst intoxicated. In general, males consumed more liquor in all age groups. In Queensland there were 21,503 road crashes in 2001, involving 324 fatalities, and the largest contributing factor was alcohol and/or drugs (Road Traffic Report, 2001). There were 23,438 road crashes in 2004, involving 289 fatalities, and again the largest contributing factor was alcohol and/or drugs (DTMR, 2009). Although measures such as random breath testing have been effective in reducing the road toll (Watson, Fraine & Mitchell, 1995), the recidivist drink driver remains a serious problem. These findings were later supported by research by Leal, King, and Lewis (2006). This Queensland study found that of the 24,661 drink drivers intercepted in 2004, 3,679 (14.9%) were recidivists with multiple drink driving convictions in the previous three years covered (Leal et al., 2006).
The legal definition of the term “recidivist” is consistent with the Transport Operations (Road Use Management) Act (1995) and is applied to individuals who have been charged with multiple drink driving offences in the previous five years. In Australia, relatively little attention has been given to prevention programs that target high-risk repeat drink drivers. However, over the last ten years a rehabilitation program specifically designed to reduce recidivism among repeat drink drivers has been operating in Queensland. The program, formally known as the “Under the Limit” drink driving rehabilitation program (UTL), was designed and implemented by the research team at the Centre for Accident Research and Road Safety in Queensland, with funding from the Federal Office of Road Safety and the Institute of Criminology (see Sheehan, Schonfeld & Davey, 1995). By 2009 over 8,500 drink driving offenders had been referred to the program (Australian Institute of Crime, 2009).
Abstract:
Anxiety disorders have been viewed as manifestations of broad underlying predisposing personality constructs, such as neuroticism, combined with more specific individual differences in unhelpful information processing styles. Given the high prevalence of anxiety and the significant impairment it causes, there is an important need to continue to explore successful treatments for these disorders. Research indicates that there is still room for significantly improving attrition rates and treatment adherence. Traditionally, Motivational Interviewing (MI) has been used to facilitate health behaviour change. Recently, MI has been applied to psychotherapy and has been shown to improve the outcome of CBT. However, these studies have considered only pre- and post-treatment measures and have neglected when changes occur along the course of therapy. This leaves unanswered the question of how pre-treatment MI affects the treatment trajectory of therapy. This study provides preliminary research towards answering this question by tracking changes on a weekly basis over the course of group CBT. Prior to group CBT, 40 individuals with a principal anxiety disorder diagnosis were randomly assigned either to receive three individual sessions of MI or to a waitlist control group. All participants then received the same dosage of ten weekly two-hour sessions of group CBT. Tracking treatment outcome trajectory over the course of CBT, the pre-treatment MI group, compared to the control group, experienced greater improvement early in the course of therapy in symptom distress, interpersonal relationships and quality of life. This early advantage over the control group was then maintained throughout therapy. These results not only demonstrate the value of adding MI to CBT, but also highlight the immediacy of MI effects. Further research is needed to determine the robustness of these effects and to inform clinical implications of how best to apply MI to improve treatment adherence to CBT for anxiety disorders.
Abstract:
We investigate known security flaws in the context of security ceremonies to gain an understanding of the ceremony analysis process. The term security ceremony describes a system of protocols and humans that interact for a specific purpose. Security ceremonies and ceremony analysis constitute an area of research in its infancy, and we explore the basic principles to better understand the issues involved. We analyse three ceremonies, HTTPS, EMV and Opera Mini, and use the information gained from this experience to establish a list of typical flaws in ceremonies. Finally, we use that list to analyse a protocol proven secure for human use. This leads to a realisation of the strengths and weaknesses of ceremony analysis.
Abstract:
To detect and annotate the key events of live sports videos, we need to tackle the semantic gap in audio-visual information. Previous work has successfully extracted semantics from time-stamped web match reports, which are synchronized with the video contents. However, web and social media articles without time-stamps have not been fully leveraged, even though they are increasingly used to complement the coverage of major sporting tournaments. This paper addresses this limitation with a novel multimodal summarization framework based on sentiment analysis and players' popularity. It uses audiovisual content, web articles, blogs, and commentators' speech to automatically annotate and visualize the key events and key players in a sports tournament coverage. The experimental results demonstrate that the automatically generated video summaries are aligned with the events identified from the official website match reports.
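As a loose illustration of ranking events by sentiment and player popularity, the sketch below scores candidate report sentences with a tiny hand-made lexicon and mention counts. The lexicon, player list and scoring rule are placeholders; the paper's actual sentiment model and popularity measure are not described in the abstract.

```python
from collections import Counter

# Toy lexicon and player list (illustrative only).
POSITIVE = {"brilliant", "stunning", "superb", "winner"}
NEGATIVE = {"miss", "foul", "injury", "error"}

def sentiment(text):
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def rank_events(sentences, players, top_k=2):
    """Score each candidate sentence by sentiment strength plus how often
    it mentions frequently covered players, then keep the top-k events."""
    mentions = Counter(p for s in sentences for p in players if p in s)
    def score(s):
        return abs(sentiment(s)) + sum(mentions[p] for p in players if p in s)
    return sorted(sentences, key=score, reverse=True)[:top_k]

sentences = [
    "Smith scores a brilliant goal in the final minute",
    "A quiet first half with few chances",
    "Jones limps off with an injury after a bad foul",
]
print(rank_events(sentences, players=["Smith", "Jones"]))
```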
Abstract:
Human facial expression is a complex process characterized by dynamic, subtle and regional emotional features. State-of-the-art approaches to facial expression recognition (FER) have not fully utilized such features to improve recognition performance. This paper proposes an approach that overcomes this limitation using patch-based ‘salient’ Gabor features. A set of 3D patches is extracted to represent the subtle and regional features, and then fed into patch matching operations to capture the dynamic features. Experimental results show a significant performance improvement of the proposed approach due to the use of the dynamic features. Performance comparison with previous work also confirms that the proposed approach achieves the highest correct recognition rate (CRR) reported to date on the JAFFE database and top-level performance on the Cohn-Kanade (CK) database.
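For readers unfamiliar with Gabor features, here is a minimal sketch of computing multi-orientation Gabor responses for a single face patch. The kernel parameters, patch size and mean-magnitude pooling are assumptions; the paper's 3D (spatio-temporal) patch extraction, salience selection and patch matching stages are not reproduced here.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(ksize=15, sigma=3.0, theta=0.0, lambd=8.0, gamma=0.5):
    """Real part of a 2-D Gabor filter (standard formulation)."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2)) \
           * np.cos(2 * np.pi * xr / lambd)

def patch_features(patch, orientations=4):
    """Mean Gabor magnitude of one face patch at several orientations."""
    feats = []
    for k in range(orientations):
        kern = gabor_kernel(theta=k * np.pi / orientations)
        resp = convolve2d(patch, kern, mode="same", boundary="symm")
        feats.append(np.abs(resp).mean())
    return np.array(feats)

# Toy usage: a random 32x32 "patch" standing in for a face region.
rng = np.random.default_rng(1)
print(patch_features(rng.random((32, 32))))
```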
Abstract:
The present article, which is abstracted from a larger study into the acquisition and exercise of nephrology nursing expertise, aims to explore the concept of recognition of expertise. The study used grounded theory methodology and involved 17 registered nurses who were practising in a metropolitan renal unit in New South Wales, Australia. Concurrent data collection and analysis were undertaken, incorporating participant observations and interviews. According to nurses in this study, patients, doctors and other nurses recognized that some nurses were experts while others were not. In addition, being trusted, being a role model and teaching others were important components of being recognized as an expert nephrology nurse. Of importance for nursing, the results of the present study indicate that knowledge and experience are not sufficient to ensure expert practice; recognition of expertise by others is an important function of expertise acquisition.