828 results for process theory
Abstract:
Technology and Nursing Practice explains and critically engages with the practice implications of technology for nursing. It takes a broad view of technology, covering not only health informatics, but also 'tele-nursing' and the use of equipment in clinical practice.
Abstract:
Objects move, collide, flow, bend, heat up, cool down, stretch, compress and boil. These and other things that cause changes in objects over time are intuitively characterized as processes. To understand common-sense physical reasoning, and to build programs that interact with the physical world as well as people do, we must understand qualitative reasoning about processes: when they will occur, their effects, and when they will stop. Qualitative Process theory defines a simple notion of physical process that appears useful as a language in which to write dynamical theories. Reasoning about processes also motivates a new qualitative representation for quantity in terms of inequalities, called the quantity space. This report describes the basic concepts of Qualitative Process theory and several kinds of reasoning that can be performed with them, and discusses its impact on other issues in common-sense reasoning about the physical world, such as causal reasoning and measurement interpretation. Several extended examples illustrate the utility of the theory, including figuring out that a boiler can blow up, that an oscillator with friction will eventually stop, and how to say that you can pull with a string but not push with it. The report also describes GIZMO, an implemented computer program which uses Qualitative Process theory to make predictions and interpret simple measurements. The representations and algorithms used in GIZMO are described in detail and illustrated with several examples.
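A minimal sketch, assuming an encoding of the quantity space as a set of ordinal relations (this is illustrative only; GIZMO's actual representations are specified in the report itself), of how inequality-based quantity information might be stored and queried in Python:

```python
# Hypothetical quantity space: values are known only through inequalities
# against other quantities, never through numbers.
class QuantitySpace:
    def __init__(self):
        self.less_than = set()  # ordered pairs (a, b) meaning "a < b"

    def add_inequality(self, smaller, larger):
        self.less_than.add((smaller, larger))

    def entails_less(self, a, b):
        # Does a < b follow by transitivity from the recorded inequalities?
        frontier, seen = [a], {a}
        while frontier:
            x = frontier.pop()
            for (s, bigger) in self.less_than:
                if s == x and bigger not in seen:
                    if bigger == b:
                        return True
                    seen.add(bigger)
                    frontier.append(bigger)
        return False

# Example: water below its boiling point, so a boiling process is inactive.
qs = QuantitySpace()
qs.add_inequality("T(water)", "T(boil)")
print(qs.entails_less("T(water)", "T(boil)"))  # True
```

A process precondition such as "boiling is active once T(water) reaches T(boil)" can then be tested against this partial order rather than against numeric values, which is the spirit of the inequality-based representation described above.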
Abstract:
In this thesis, I examined the relevance of dual-process theory to understanding forgiveness. Specifically, I argued that the internal conflict experienced by laypersons when forgiving (or finding themselves unable to forgive) and the discrepancies between existing definitions of forgiveness can currently be best understood through the lens of dual-process theory. Dual-process theory holds that individuals engage in two broad forms of mental processing corresponding to two systems, here referred to as System 1 and System 2. System 1 processing is automatic, unconscious, and operates through learned associations and heuristics. System 2 processing is effortful, conscious, and operates through rule-based and hypothetical thinking. Different definitions of forgiveness amongst both laypersons and scholars may reflect different processes within each system. Further, lay experiences of internal conflict concerning forgiveness may frequently result from processes within each system leading to different cognitive, affective, and behavioural responses. The study conducted for this thesis tested the hypotheses that processing within System 1 can directly affect one's likelihood to forgive, and that this effect is moderated by System 2 processing. I used subliminal conditioning to manipulate System 1 processing by creating positive or negative conditioned attitudes towards a hypothetical transgressor. I used working memory load (WML) to inhibit System 2 processing amongst half of the participants. The conditioning phase of the study failed, and so no conclusions could be drawn regarding the roles of System 1 and System 2 in forgiveness. The implications of dual-process theory for forgiveness research and clinical practice, as well as directions for future research, are discussed.
Abstract:
Given the growing number of wrongful convictions involving faulty eyewitness evidence and the strong reliance by jurors on eyewitness testimony, researchers have sought to develop safeguards to decrease erroneous identifications. While decades of eyewitness research have led to numerous recommendations for the collection of eyewitness evidence, less is known regarding the psychological processes that govern identification responses. The purpose of the current research was to expand the theoretical knowledge of eyewitness identification decisions by exploring two separate memory theories: signal detection theory and dual-process theory. This was accomplished by examining both system and estimator variables in the context of a novel lineup recognition paradigm. Both theories were also examined in conjunction with confidence to determine whether it might add significantly to the understanding of eyewitness memory. In two separate experiments, both an encoding and a retrieval-based manipulation were chosen to examine the application of theory to eyewitness identification decisions. Dual-process estimates were measured through the use of remember-know judgments (Gardiner & Richardson-Klavehn, 2000). In Experiment 1, the effects of divided attention and lineup presentation format (simultaneous vs. sequential) were examined. In Experiment 2, perceptual distance and lineup response deadline were examined. Overall, the results indicated that discrimination and remember judgments (recollection) were generally affected by variations in encoding quality, while response criterion and know judgments (familiarity) were generally affected by variations in retrieval options. Specifically, as encoding quality improved, discrimination ability and judgments of recollection increased; and as the retrieval task became more difficult, there was a shift toward lenient choosing and more reliance on familiarity. The application of signal detection theory and dual-process theory in the current experiments produced predictable results on both system and estimator variables. These theories were also compared to measures of general confidence, calibration, and diagnosticity. The application of the additional confidence measures in conjunction with signal detection theory and dual-process theory gave a more in-depth explanation than either theory alone. Therefore, the general conclusion is that eyewitness identifications can be understood in a more complete manner by applying theory and examining confidence. Future directions and policy implications are discussed.
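For readers unfamiliar with the signal detection indices referred to above, a brief illustrative computation follows; the hit and false-alarm rates are hypothetical, not results from these experiments:

```python
# Standard signal detection indices from hypothetical hit and false-alarm
# rates (illustrative only; not data from the experiments described above).
from scipy.stats import norm

def sdt_indices(hit_rate, false_alarm_rate):
    z_hit = norm.ppf(hit_rate)         # inverse-normal (z) transform
    z_fa = norm.ppf(false_alarm_rate)
    d_prime = z_hit - z_fa             # discrimination ability
    criterion = -0.5 * (z_hit + z_fa)  # response bias (positive = conservative)
    return d_prime, criterion

# A witness who identifies the culprit in 80% of target-present lineups but
# picks someone from 20% of target-absent lineups:
d, c = sdt_indices(0.80, 0.20)
print(f"d' = {d:.2f}, criterion = {c:.2f}")  # d' ≈ 1.68, c ≈ 0.00
```

In this framework, the encoding manipulations described above would be expected to move d', whereas retrieval manipulations such as response deadlines would be expected to shift the criterion.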
Abstract:
The repeated failure to comply with the regulations and policies governing response times in the country's mining contracting process, currently handled by the recently created National Mining Agency (Agencia Nacional de Minería, ANM), has meant that the administration of the mining resource is not carried out under the principles of efficiency, effectiveness, economy and speed. These evident weaknesses cause backlogs in the resolution of applications, the freezing of areas available for contracting, cost overruns, and delays beyond the response times established by current regulations, and as a consequence they create uncertainty among mining investors and losses in the collection of surface fees (canon superficiario), among other effects. The objective of this research is to analyse Colombia's mining titling process using the continuous-improvement philosophy developed in the Theory of Constraints (TOC), in order to identify the bottlenecks that prevent the process from flowing adequately and to propose improvement alternatives which, once implemented, exploit the constraints and subordinate the rest of the system to them.
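As a hedged sketch of TOC's first step (the stage names and monthly capacities below are hypothetical, not drawn from the ANM's actual workflow), identifying the constraint amounts to finding the lowest-capacity stage in the titling pipeline:

```python
# Hypothetical illustration of TOC step 1: system throughput is capped by the
# slowest stage, so that stage is the constraint to exploit and subordinate to.
stages = {
    "application_review":   120,  # applications per month (assumed figures)
    "area_verification":     45,
    "technical_evaluation":  30,  # the bottleneck in this example
    "contract_drafting":     80,
}

bottleneck = min(stages, key=stages.get)
system_throughput = stages[bottleneck]
print(f"Constraint: {bottleneck} ({system_throughput} applications/month)")
# Capacity gains at any other stage are wasted until this stage improves.
```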
Abstract:
This paper is intended both as a contribution to the conceptual work on process in economic thought and as an attempt to connect a non-institutionalist, non-evolutionary thinker to it. The paper has two principal objectives: (i) to delineate a broad, philosophically grounded conception of what an economic process theory (EPT) is; and (ii) to locate the contributions of George Shackle within this broad conception of EPT. In pursuing these two objectives, I hope to draw out the originality and significance of Shackle’s economics with a particular emphasis on what he adds to process conceptions developed within other heterodox traditions such as institutional and evolutionary economics. I will also highlight some of the perceived limitations of Shackle’s approach and link them to the limitations of process philosophy.
Abstract:
The two areas of theory upon which this research was based were 'strategy development process' (SDP) and 'complex adaptive systems' (CAS), the latter as part of complexity theory focused on human social organisations. The literature reviewed showed that there is a paucity of empirical work and theory in the overlap of the two areas, providing an opportunity for contributions to knowledge in each area of theory, and for practitioners. An inductive approach was adopted for this research, in an effort to discover new insights into the focus area of study. It was undertaken from within an interpretivist paradigm, and based on a novel conceptual framework. The organisationally intimate nature of the research topic, and the researcher's circumstances, required a research design that was both in-depth and long term. The result was a single, exploratory case study, which included data from 44 in-depth, semi-structured interviews with 36 people, involving all the top management team members and other significant staff members; observations, rumour and grapevine (ORG) data; and archive data, over a 5½-year period (2005–2010). Findings confirm the validity of the conceptual framework, and that complex adaptive systems theory has potential to extend strategy development process theory. The research has shown how and why the strategy process developed in the case study organisation by providing deeper insights into the behaviour of the people, their backgrounds, and their interactions. Broad predictions of the 'latent strategy development' process and some elements of the strategy content are also possible. Based on this research, it is possible to extend the utility of the SDP model by including people's behavioural characteristics within the organisation, via complex adaptive systems theory. Further research is recommended to test the limits of the application of the conceptual framework and improve its efficacy with more organisations across a variety of sectors.
Abstract:
2010 Mathematics Subject Classification: 60J85, 92D25.
Networks in the shadow of markets and hierarchies: calling the shots in the visual effects industry
Abstract:
The nature and organisation of creative industries and the creative economy have received increased attention in recent academic and policy literatures (Florida 2002; Grabher 2002; Scott 2006a). Constituted as one variant on new economy narratives, creativity, alongside knowledge, has been presented as a key competitive asset. Such industries – ranging from advertising to film and new media – are seen not merely as expanding their scale and scope, but as leading-edge proponents of a more general trend towards new forms of organization and economic coordination (Davis and Scase 2000). The idea of network forms (and the consequent displacement of markets and hierarchies) has been at the heart of attempts to differentiate the field economically and spatially. Across the discussion of both production models and work/employment relations is the assertion of the enhanced importance of trust and non-market relations in coordinating structures and practices. This reflects an influential view in sociological, management, geography and other literatures that social life is 'intrinsically networked' (Sunley 2008: 12) and that we can confidently use the term 'network society' to describe contemporary structures and practices (Castells 1996). Our paper is sceptical of the conceptual and empirical foundations of such arguments. We draw on a number of theoretical resources, including institutional theory, global value chain analysis and labour process theory (see Smith and McKinlay 2009), to explore how a more realistic and grounded analysis of the nature of and limits to networks can be articulated. Given space constraints, we cannot address all the dimensions of network arguments or evidence. Our focus is on inter- and intra-firm relations and draws on research into a particular creative industry – visual effects – that is a relatively new though increasingly important global production network. Through this examination, a different model of the creative industries and creative work emerges – one in which market rules and patterns of hierarchical interaction structure the behaviour of economic actors and remain a central focus of analysis. The next section outlines and unpacks in more detail arguments concerning the role and significance of networks, markets and hierarchies in production models and work organisation in creative industries and the 'creative economy'.
Abstract:
Purpose: The purpose of this paper is to provide a labour process theory interpretation of four case studies within the Australian construction industry. In each case study a working time intervention (a shift to a five-day working week from the industry-standard six days) was implemented as an attempt to improve the work-life balance of employees. Design/methodology/approach: This paper was based on four case studies with mixed methods. Each case study used a variety of data collection methods, including questionnaires, short and long interviews, and focus groups. Findings: It was found that the complex mix of wage- and salary-earning staff within the construction industry, along with labour market pressures, means that changing to a five-day working week is quite a radical notion within the industry. However, some organisations are willing to explore opportunities for change, with mixed experiences. Practical implications: The practical implications of this research include understanding the complexity within the Australian construction industry, based around hours of work and pay systems. Decision-makers within the construction industry must recognize a range of competing pressures that mean that "preferred" managerial styles might not be appropriate. Originality/value: This paper shows that construction firms must take an active approach to reducing the culture of long working hours. This can only be achieved by addressing issues of project timelines and budgets and ensuring that take-home pay is not reliant on long hours of overtime.
Abstract:
Many of the classification algorithms developed in the machine learning literature, including the support vector machine and boosting, can be viewed as minimum contrast methods that minimize a convex surrogate of the 0–1 loss function. The convexity makes these algorithms computationally efficient. The use of a surrogate, however, has statistical consequences that must be balanced against the computational virtues of convexity. To study these issues, we provide a general quantitative relationship between the risk as assessed using the 0–1 loss and the risk as assessed using any nonnegative surrogate loss function. We show that this relationship gives nontrivial upper bounds on excess risk under the weakest possible condition on the loss function—that it satisfies a pointwise form of Fisher consistency for classification. The relationship is based on a simple variational transformation of the loss function that is easy to compute in many applications. We also present a refined version of this result in the case of low noise, and show that in this case, strictly convex loss functions lead to faster rates of convergence of the risk than would be implied by standard uniform convergence arguments. Finally, we present applications of our results to the estimation of convergence rates in function classes that are scaled convex hulls of a finite-dimensional base class, with a variety of commonly used loss functions.
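To fix ideas, the kind of quantitative relationship described above can be sketched as follows (notation is assumed here for exposition and not quoted from the paper): for a classification-calibrated convex surrogate loss, the excess 0–1 risk is controlled by the excess surrogate risk through a convex transform obtained from a variational transformation of the loss.

```latex
% Illustrative form of the comparison inequality (notation assumed):
% R(f) is the 0-1 risk, R_phi(f) the surrogate (phi-) risk, and starred
% quantities the corresponding minimal (Bayes) risks.
\[
  \psi\bigl(R(f) - R^{*}\bigr) \;\le\; R_{\phi}(f) - R_{\phi}^{*}.
\]
% Standard examples of the transform: for the hinge loss
% $\phi(\alpha)=\max(0,\,1-\alpha)$, $\psi(\theta)=|\theta|$, so excess 0-1
% risk is bounded directly by excess hinge risk; for the exponential loss
% $\phi(\alpha)=e^{-\alpha}$, $\psi(\theta)=1-\sqrt{1-\theta^{2}}$; and for
% the squared loss $\phi(\alpha)=(1-\alpha)^{2}$, $\psi(\theta)=\theta^{2}$.
```

The low-noise refinement mentioned in the abstract sharpens bounds of this shape, yielding faster convergence rates for strictly convex losses than the unrefined inequality alone would suggest.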