932 results for distributed cognition theory
Abstract:
Readily accepted knowledge regarding crash causation is consistently omitted from efforts to model and subsequently understand motor vehicle crash occurrence and its contributing factors. For instance, distracted and impaired driving account for a significant proportion of crash occurrence, yet are rarely modeled explicitly. In addition, spatially allocated influences such as local law enforcement efforts, proximity to bars and schools, and chronic roadside distractions (advertising, pedestrians, etc.) play a role in contributing to crash occurrence and yet are routinely absent from crash models. By and large, these well-established omitted effects are simply assumed to contribute to model error, with predominant focus on modeling the engineering and operational effects of transportation facilities (e.g., AADT, number of lanes, speed limits, lane widths, etc.). The typical analytical approach, with a variety of statistical enhancements, has been to model crashes that occur at system locations as negative binomial (NB) distributed events that arise from a singular, underlying crash-generating process. These models and their statistical kin dominate the literature; however, it is argued in this paper that these models fail to capture the underlying complexity of motor vehicle crash causes, and thus thwart deeper insights regarding crash causation and prevention. This paper first describes hypothetical scenarios that collectively illustrate why current models mislead highway safety researchers and engineers. It is argued that current model shortcomings are significant and will lead to poor decision-making. Exploiting our current state of knowledge of crash causation, crash counts are postulated to arise from three processes: observed network features, unobserved spatial effects, and ‘apparent’ random influences that reflect largely behavioral influences of drivers. It is argued, furthermore, that these three processes can in theory be modeled separately to gain deeper insight into crash causes, and that the resulting model represents a more realistic depiction of reality than the state-of-practice NB regression. An admittedly imperfect empirical model that mixes three independent crash occurrence processes is shown to outperform the classical NB model. The questioning of current modeling assumptions and the implications of the latent mixture model for current practice are the most important contributions of this paper, with an initial but rather vulnerable attempt to model the latent mixtures as a secondary contribution.
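To make the contrast concrete, the following is a minimal, hypothetical sketch of the kind of comparison the abstract describes: crash counts simulated as a mix of three independent processes (observed network features, unobserved spatial effects, and apparent random behavioural influences), with a standard single-process negative binomial regression fitted on the network covariate alone. Variable names, parameter values and covariates are illustrative assumptions, not the paper's data or estimation approach.

```python
# Hypothetical sketch: three-process crash generation vs. a single-process NB fit.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_sites = 500

aadt = rng.lognormal(mean=9.0, sigma=0.5, size=n_sites)   # observed exposure (network feature)
spatial = rng.gamma(shape=2.0, scale=0.5, size=n_sites)   # unobserved spatial effect
behaviour = rng.poisson(lam=0.3, size=n_sites)            # 'apparent' random behavioural component

network_rate = 1e-4 * aadt                                 # engineering/operational process
crashes = rng.poisson(network_rate * spatial) + behaviour  # mixed crash-generating process

# State-of-practice benchmark: one NB regression on the observed covariate only.
X = sm.add_constant(np.log(aadt))
nb_fit = sm.GLM(crashes, X, family=sm.families.NegativeBinomial(alpha=1.0)).fit()
print(nb_fit.summary())
```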
Abstract:
Background: To describe the iterative development process and final version of ‘MobileMums’: a physical activity intervention for women with young children (<5 years) delivered primarily via mobile telephone (mHealth) short messaging service (SMS). Methods: MobileMums development followed the five steps outlined in the mHealth development and evaluation framework: 1) conceptualization (critique of literature and theory); 2) formative research (focus groups, n = 48); 3) pre-testing (qualitative pilot of intervention components, n = 12); 4) pilot testing (pilot RCT, n = 88); and 5) qualitative evaluation of the refined intervention (n = 6). Results: Key findings identified throughout the development process that shaped the MobileMums program were the need for: behaviour change techniques to be grounded in Social Cognitive Theory; tailored SMS content; two-way SMS interaction; rapport between SMS sender and recipient; an automated software platform to generate and send SMS; and flexibility in the location of a face-to-face delivered component. Conclusions: The final version of MobileMums is flexible and adaptive to individual participants’ physical activity goals, expectations and environment. MobileMums is being evaluated in a community-based randomised controlled efficacy trial (ACTRN12611000481976).
Abstract:
Given the smart grid paradigm as a promising backbone for future networks, this paper proposes a new coordination approach for low-voltage (LV) networks based on a distributed control algorithm. The approach divides the LV network into hierarchical communities, each controlled by a control agent. Different levels of communication are proposed for this structure so that the network can be controlled in different operation modes.
Abstract:
What are the information practices of teen content creators? In the United States, over two-thirds of teens have participated in creating and sharing content in online communities that are developed for the purpose of allowing users to be producers of content. This study investigates how teens participating in digital participatory communities find and use information, as well as how they experience the information. From this investigation emerged a model of their information practices while creating and sharing content such as film-making, visual art work, storytelling, music, programming, and website design in digital participatory communities. The research uses grounded theory methodology in a social constructionist framework to investigate the research problem: what are the information practices of teen content creators? Data were gathered through semi-structured interviews and observation of teens’ digital communities. Analysis occurred concurrently with data collection, and the principle of constant comparison was applied in analysis. As findings were constructed from the data, additional data were collected until a substantive theory was constructed and no new information emerged from data collection. The theory constructed from the data describes five information practices of teen content creators: learning community, negotiating aesthetic, negotiating control, negotiating capacity, and representing knowledge. Describing the five information practices requires three descriptive components: the community of practice, the experiences of information, and the information actions. The experiences of information include information as participation, inspiration, collaboration, process, and artifact. Information actions include activities that occur in the categories of gathering, thinking and creating. The experiences of information and information actions intersect in the information practices, which are situated within the specific community of practice, such as a digital participatory community. Finally, the information practices interact and build upon one another; this is represented in a graphic model and accompanying explanation.
Abstract:
In order to drive sustainable financial profitability, service firms make significant investments in creating service environments that consumers will prefer over the environments of their competitors. To date, servicescape research has been overly focused on understanding consumers’ emotional and physiological responses to servicescape attributes, rather than taking a holistic view of how consumers cognitively interpret servicescapes. This thesis argues that consumers cognitively ascribe symbolic meanings to servicescapes and then evaluate whether those meanings are congruent with their sense of Self in order to form a preference for a servicescape. Consequently, this thesis takes a Self Theory approach to servicescape symbolism to address the following broad research question: How do ascribed symbolic meanings influence servicescape preference? Using a three-study, mixed-method approach, this thesis investigates the symbolic meanings consumers ascribe to servicescapes and empirically tests whether the joint effects of congruence between consumer Self and the symbolic meanings ascribed to servicescapes influence consumers’ servicescape preference. First, Study One identifies the symbolic meanings ascribed to salient servicescape attributes using a combination of repertory tests and laddering techniques within 19 semi-structured individual depth interviews. Study Two modifies an existing scale to create a symbolic servicescape meaning scale to measure the symbolic meanings ascribed to servicescapes. Finally, Study Three utilises the Self-Congruity Model to empirically examine the joint effects of consumer Self and servicescape on consumers’ preference for servicescapes. Using polynomial regression with response surface analysis, 14 joint-effects models demonstrate that both Self-Servicescape incongruity and congruity influence consumers’ preference for servicescapes. Combined, the findings of the three studies suggest that the symbolic meanings ascribed to servicescapes and their (in)congruities with consumers’ sense of Self can be used to predict consumers’ preferences for servicescapes. These findings make several key theoretical and practical contributions to services marketing.
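For readers unfamiliar with the technique, joint-effects models of this kind are typically second-order polynomial regressions whose fitted surface is then inspected with response surface analysis. The generic specification below is illustrative only; the symbols are not the thesis's own notation or measures.

```latex
% Generic second-order (response surface) model of servicescape preference Z as a
% joint function of consumer Self (S) and the symbolic meaning ascribed to the
% servicescape (M); \varepsilon is the error term.
Z = b_0 + b_1 S + b_2 M + b_3 S^2 + b_4 (S \times M) + b_5 M^2 + \varepsilon
```

In response surface analysis, congruence and incongruence effects are read off the shape of the fitted surface along the S = M and S = -M lines rather than from a single difference score.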
Abstract:
Grounded theory, first developed by Glaser and Strauss in the 1960s, was introduced into nursing education as a distinct research methodology in the 1970s. The theory is grounded in a critique of the dominant contemporary approach to social inquiry, which imposed "enduring" theoretical propositions onto study data. Rather than starting from a set theoretical framework, grounded theory relies on researchers distinguishing meaningful constructs from generated data and then identifying an appropriate theory. Grounded theory is thus particularly useful in investigating complex issues and behaviours not previously addressed, as well as concepts and relationships in particular populations or places that are still undeveloped or weakly connected. Grounded theory data analysis processes include open, axial and selective coding levels. The purpose of this article was to explore the grounded theory research process and provide an initial understanding of this methodology.
Abstract:
The preparedness theory of classical conditioning proposed by Seligman (1970, 1971) has been applied extensively over the past 40 years to explain the nature and "source" of human fear and phobias. In this review we examine the formative studies that tested the four defining characteristics of prepared learning with animal fear-relevant stimuli (typically snakes and spiders) and consider claims that fear of social stimuli, such as angry faces or faces of racial out-group members, may also be acquired utilising the same preferential learning mechanism. Exposition of critical differences between fear learning to animal and social stimuli suggests that a single account cannot adequately explain fear learning with both animal and social stimuli. We demonstrate that fear conditioned to social stimuli is less robust than fear conditioned to animal stimuli, as it is susceptible to cognitive influence, and propose that it may instead reflect negative stereotypes and social norms. Thus, a theoretical model that can accommodate the influence of both biological and cultural factors is likely to have broader utility in the explanation of fear and avoidance responses than accounts based on a single mechanism.
Abstract:
Power system restoration after a large-area outage involves many factors, and the procedure is usually very complicated. A decision-making support system can therefore be developed to find the optimal black-start strategy. In order to evaluate candidate black-start strategies, some indices, usually both qualitative and quantitative, are employed. However, it may not be possible to directly synthesize these indices, and different extents of interaction may exist among them. In the existing black-start decision-making methods, qualitative and quantitative indices cannot be well synthesized, and the interactions among different indices are not taken into account. The vague set, an extended version of the well-developed fuzzy set, can be employed to deal with decision-making problems with interacting attributes. Given this background, the vague set is first employed in this work to represent the indices and facilitate comparisons among them. Then, the concept of a vague-valued fuzzy measure is presented, and on that basis a mathematical model for black-start decision-making is developed. Compared with the existing methods, the proposed method can deal with the interactions among indices and represent fuzzy information more reasonably. Finally, an actual power system is used to demonstrate the basic features of the developed model and method.
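As background on the representation used here: in a vague set, an element's membership is described by a truth-membership t and a false-membership f with t + f <= 1, so each index value becomes an interval [t, 1 - f] rather than a single grade. The sketch below illustrates only this representation; the class name, checks and example values are ours and are not the paper's implementation of the vague-valued fuzzy measure.

```python
# Minimal sketch of a vague (interval-valued) membership grade.
from dataclasses import dataclass

@dataclass
class VagueValue:
    t: float  # truth-membership (evidence for)
    f: float  # false-membership (evidence against)

    def __post_init__(self):
        # A vague value requires non-negative memberships with t + f <= 1.
        if not (0.0 <= self.t and 0.0 <= self.f and self.t + self.f <= 1.0):
            raise ValueError("vague value requires 0 <= t, 0 <= f and t + f <= 1")

    @property
    def interval(self):
        # The membership interval [t, 1 - f]; the gap 1 - t - f is the unknown part.
        return (self.t, 1.0 - self.f)

# e.g. an index assessed as 'supported 0.6, opposed 0.2, unknown 0.2'
print(VagueValue(0.6, 0.2).interval)   # (0.6, 0.8)
```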
Abstract:
Background: Failure to convey time-critical information to team members during surgery diminishes members’ perception of the dynamic information relevant to their task and compromises shared situational awareness. This research reports the dialog around clinical decisions made by team members in the time-pressured and high-risk context of surgery, and the impact of these communications on shared situational awareness. Methods: Fieldwork methods were used to capture the dynamic integration of individual and situational elements in surgery that provided the backdrop for clinical decisions. Nineteen semi-structured interviews were performed with 24 participants from anaesthesia, surgery, and nursing in the operating rooms of a large metropolitan hospital in Queensland, Australia. Thematic analysis was used. Results: The domain “coordinating decisions in surgery” was generated from the textual data. Within this domain, three themes illustrated the dialog of clinical decisions: synchronizing and strategizing actions, sharing local knowledge, and planning contingency decisions based on priority. Conclusion: Strategies used to convey decisions that enhanced shared situational awareness included the use of “self-talk”, closed-loop communications, and “overhearing” conversations that occurred at the operating table. Behaviours that compromised a team’s shared situational awareness included tunnelling and fixating on one aspect of the situation.
Abstract:
Lankes and Silverstein (2006) introduced the “participatory library” and suggested that the nature and form of the library should be explored. In the last several years, some attempts have been made to develop contemporary library models, often known as Library 2.0. However, little research has been based on empirical data, and such models have had a strong focus on technical aspects with less focus on participation. The research presented in this paper fills this gap. A grounded theory approach was adopted for this study. Six librarians were involved in in-depth individual interviews. As a preliminary result, five main factors of the participatory library emerged: technological, human, educational, socio-economic, and environmental. Five factors influencing participation in libraries were also identified: finance, technology, education, awareness, and policy. The study’s findings provide a fresh perspective on the contemporary library and create a basis for further studies in this area.
Abstract:
In this paper we investigate the distribution of the product of Rayleigh distributed random variables. Using the Mellin-Barnes inversion formula and the saddle-point approach, we obtain an upper bound for the product distribution. The accuracy of this tail approximation increases as the number of random variables in the product increases.
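As background, the Mellin-transform route referred to in the abstract proceeds roughly as follows for N independent Rayleigh variables X_j with scale parameters sigma_j. These are standard results; the notation is ours, not necessarily the paper's.

```latex
% Rayleigh density and its Mellin transform
f_{X_j}(x) = \frac{x}{\sigma_j^2}\, e^{-x^2/(2\sigma_j^2)}, \qquad
\mathcal{M}_{X_j}(s) = \mathbb{E}\!\left[X_j^{\,s-1}\right]
                     = \left(2\sigma_j^2\right)^{(s-1)/2} \Gamma\!\left(\tfrac{s+1}{2}\right).

% For the product Y = X_1 X_2 \cdots X_N of independent variables the Mellin transform
% factorises, and the density is recovered from the Mellin--Barnes integral, which the
% saddle-point method can then be used to bound:
f_Y(y) = \frac{1}{2\pi i} \int_{c - i\infty}^{c + i\infty}
         y^{-s} \prod_{j=1}^{N} \left(2\sigma_j^2\right)^{(s-1)/2}
         \Gamma\!\left(\tfrac{s+1}{2}\right) \mathrm{d}s.
```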
Abstract:
Whole-body computer control interfaces present new opportunities to engage children with games for learning. Stomp is a suite of educational games that use such a technology, allowing young children to use their whole body to interact with a digital environment projected on the floor. To maximise the effectiveness of this technology, tenets of self-determination theory (SDT) are applied to the design of Stomp experiences. By meeting user needs for competence, autonomy, and relatedness, our aim is to increase children's engagement with the Stomp learning platform. Analysis of Stomp's design suggests that these tenets are met. Observations from a case study of Stomp being used by young children show that they were highly engaged and motivated by Stomp. This analysis demonstrates that continued application of SDT to Stomp will further enhance user engagement. It is also suggested that SDT, when applied more widely to other whole-body multi-user interfaces, could instil similar positive effects.
Abstract:
The well-established under-frequency load shedding (UFLS) scheme is deemed the last effective remedial measure against a severe frequency decline in a power system. With the ever-increasing size of power systems and the extensive penetration of distributed generators (DGs), the problem of developing an optimal UFLS strategy faces some new challenges. Given this background, an optimal UFLS strategy is developed for a distribution system, with DGs and static load characteristics taken into consideration. Based on the frequency and the rate of change of frequency, the presented strategy consists of several basic rounds and a special round. In the basic rounds, the frequency emergency is alleviated by quickly shedding some loads. In the special round, frequency security is maintained, and the operating parameters of the distribution system are optimized by adjusting the output powers of DGs and some loads. The modified IEEE 37-node test feeder is employed to demonstrate the essential features of the developed optimal UFLS strategy in the MATLAB/SIMULINK environment.
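To make the round-based structure concrete, here is a minimal, hypothetical sketch of frequency/ROCOF-triggered shedding logic of this general kind. The thresholds, shed fractions and the special-round action are illustrative placeholders and are not the parameters or the optimization used in the paper.

```python
# Hypothetical round-based UFLS logic: basic rounds triggered by frequency/ROCOF
# thresholds, then a special round that adjusts DG output instead of shedding more load.
BASIC_ROUNDS = [
    # (frequency threshold in Hz, ROCOF threshold in Hz/s, share of remaining load to shed)
    (49.0, -0.3, 0.10),
    (48.8, -0.5, 0.15),
    (48.6, -0.8, 0.20),
]
SPECIAL_ROUND_FREQ = 49.5  # Hz: below this, re-dispatch DGs and controllable loads

def ufls_action(freq_hz: float, rocof_hz_s: float) -> str:
    """Return the control action for the current frequency measurement."""
    for i, (f_thr, df_thr, share) in enumerate(BASIC_ROUNDS, start=1):
        if freq_hz <= f_thr or rocof_hz_s <= df_thr:
            return f"basic round {i}: shed {share:.0%} of remaining sheddable load"
    if freq_hz <= SPECIAL_ROUND_FREQ:
        return "special round: raise DG output and adjust controllable loads"
    return "no action: frequency within the secure band"

print(ufls_action(48.75, -0.6))   # basic round 1 triggers first on the frequency threshold
```

In the paper's strategy the special round solves an optimization over DG outputs and controllable loads; the placeholder string above only marks where that step would occur.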
Abstract:
Recent work on the numerical solution of stochastic differential equations (SDEs) has focused on the development of numerical methods with good stability and order properties. These numerical implementations have been made with fixed stepsize, but there are many situations in which a fixed stepsize is not appropriate. In the numerical solution of ordinary differential equations, much work has been carried out on developing robust implementation techniques using variable stepsize. It has been necessary, in the deterministic case, to consider the "best" choice for an initial stepsize, as well as to develop effective strategies for stepsize control; the same, of course, must be carried out in the stochastic case. In this paper, proportional integral (PI) control is applied to a variable stepsize implementation of an embedded pair of stochastic Runge-Kutta methods used to obtain numerical solutions of nonstiff SDEs. For stiff SDEs, the embedded pair of the balanced Milstein and balanced implicit methods is implemented in variable stepsize mode using a predictive controller for the stepsize change. The extension of these stepsize controllers, from a digital filter theory point of view, to PI with derivative (PID) control is also implemented. The implementations show the improvement in efficiency that can be attained by using these control theory approaches compared with the regular stepsize change strategy.
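For illustration, the core of a PI stepsize controller of the kind described above can be written in a few lines. This is a generic sketch of the standard PI update, not the paper's implementation; the gain values, estimator order and safety limits are illustrative assumptions.

```python
# Generic PI stepsize controller for an embedded pair: err_new / err_old are local
# error estimates from the current and previous accepted steps, tol is the target
# tolerance and p is the order of the error estimator.
def pi_stepsize(h: float, err_new: float, err_old: float, tol: float,
                p: int = 1, k_i: float = 0.3, k_p: float = 0.4,
                fac_min: float = 0.2, fac_max: float = 5.0) -> float:
    """Proposed next stepsize from a proportional-integral error controller."""
    integral = (tol / err_new) ** (k_i / (p + 1))           # reacts to the current error
    proportional = (err_old / err_new) ** (k_p / (p + 1))   # damps oscillations in the error
    factor = min(fac_max, max(fac_min, integral * proportional))
    return h * factor

# e.g. the error grew relative to the previous step, so the stepsize is reduced
print(pi_stepsize(h=1e-2, err_new=2e-6, err_old=1e-6, tol=1e-6))
```

The proportional factor damps oscillations in the error sequence, which is the main advantage over a controller that reacts to the current error alone.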