895 results for Deontic logic
Abstract:
One of the oldest problems in philosophy concerns the relationship between free will and moral responsibility. If we adopt the position, as have most philosophers who have addressed this issue, that we lack free will in the absolute sense, how can we truly be held accountable for what we do? This paper will contend that the most significant and interesting challenge to the long-standing status quo on the matter comes not from philosophy, jurisprudence, or even physics, but rather from psychology. By examining this debate through the lens of contemporary behaviour disorders, such as ADHD, it will be argued that notions of free will, along with its correlate, moral responsibility, are being eroded through the logic of psychology, which is steadily reconfiguring large swathes of familiar human conduct as pathology. The intention of the paper is not only to raise some concerns over the exponential growth of behaviour disorders, but also, and more significantly, to flag the ongoing relevance of philosophy for prying open contemporary educational problems in new and interesting ways.
Abstract:
Rather than understanding the recurrent failure of various attempts at crime control as unfortunate and undesirable aberrations, all too familiar glitches in an otherwise uninterrupted teleological march to a better society, such failures are instead positioned as part of the fabric of late modernity itself. That is, society does not change according to a predetermined logic along neatly defined and clearly reasoned tracks; rather, it hurtles from crisis to crisis, from failure to failure, and it is the regulation of that failure which produces new initiatives and new forms of governance. Utilising the example of the modern prison, this chapter contends that too great an emphasis upon this institution’s ‘failure’ results not only in a neglect of the many other functions that it serves in the regulation of difference, but also, and more generally, in an underestimation of the importance of failure in providing new impetus for social transformation.
Abstract:
This paper has two central purposes: the first is to survey some of the more important examples of fallacious argument, and the second is to examine the frequent use of these fallacies in support of the psychological construct of Attention Deficit Hyperactivity Disorder (ADHD). The paper divides twelve familiar fallacies into three different categories (material, psychological and logical) and contends that advocates of ADHD often seem to employ these fallacies to support their position. It is suggested that all researchers, whether investigating ADHD or otherwise, need to pay much closer attention to the construction of their arguments if they are not to make truth claims unsupported by satisfactory evidence, form or logic.
Abstract:
The proliferation of innovative schemes to address climate change at international, national and local levels signals a fundamental shift in the priority and role of the natural environment for society, organizations and individuals. This shift in shared priorities invites academics and practitioners to consider the role of institutions in shaping and constraining responses to climate change at multiple levels of organizations and society. Institutional theory provides an approach to conceptualising and addressing climate change challenges by focusing on the central logics that guide society, organizations and individuals and their material and symbolic relationship to the environment. For example, framing a response to climate change in the form of an emissions trading scheme evidences a practice informed by a capitalist market logic (Friedland and Alford 1991). However, not all responses need necessarily align with a market logic. Indeed, Thornton (2004) identifies six broad societal sectors, each with its own logic (markets, corporations, professions, states, families, religions). Hence, understanding the logics that underpin successful (and unsuccessful) climate change initiatives contributes to revealing how institutions shape and constrain practices, and provides valuable insights for policy makers and organizations. This paper develops models and propositions to consider the construction of, and challenges to, climate change initiatives based on institutional logics (Thornton and Ocasio 2008). We propose that the challenge of understanding and explaining how climate change initiatives are successfully adopted be examined in terms of their institutional logics and how these logics evolve over time. To achieve this, a multi-level framework of analysis that encompasses society, organizations and individuals is necessary (Friedland and Alford 1991). However, to date most extant studies of institutional logics have tended to emphasize one level over the others (Thornton and Ocasio 2008: 104). In addition, existing studies related to climate change initiatives have largely been descriptive (e.g. Braun 2008) or prescriptive (e.g. Boiral 2006) in terms of the suitability of particular practices. This paper contributes to the literature on logics in two ways. First, it examines multiple levels: the proliferation of the climate change agenda provides a site in which to study how institutional logics are played out across multiple, yet embedded, levels within society through the institutional forums in which change takes place. Second, the paper specifically examines how institutional logics provide society with organizing principles (material practices and symbolic constructions) which enable and constrain action and help define motives and identity. Based on this model, we develop a series of propositions about the conditions required for the successful introduction of climate change initiatives. The paper proceeds as follows. We present a review of the literature related to institutional logics and develop a generic model of the operation of institutional logics. We then consider how this model applies to key initiatives related to climate change. Finally, we develop a series of propositions which might guide insights into the successful implementation of climate change practices.
Abstract:
With service interaction modelling, it is customary to distinguish between two types of models: choreographies and orchestrations. A choreography describes interactions within a collection of services from a global perspective, where no service plays a privileged role. Instead, services interact in a peer-to-peer manner. In contrast, an orchestration describes the interactions between one particular service, the orchestrator, and a number of partner services. The main proposition of this work is an approach to bridge these two modelling viewpoints by synthesising orchestrators from choreographies. To start with, choreographies are defined using a simple behaviour description language based on communicating finite state machines. From such a model, orchestrators are initially synthesised in the form of state machines. It turns out that state machines are not suitable for orchestration modelling, because orchestrators generally need to engage in concurrent interactions. To address this issue, a technique is proposed to transform state machines into process models in the Business Process Modelling Notation (BPMN). Orchestrations represented in BPMN can then be augmented with additional business logic to achieve value-adding mediation. In addition, techniques exist for refining BPMN models into executable process definitions. The transformation from state machines to BPMN relies on Petri nets as an intermediary representation and leverages techniques from the theory of regions to identify concurrency in the initial Petri net. Once concurrency has been identified, the resulting Petri net is transformed into a BPMN model. The original contributions of this work are twofold: an algorithm to synthesise orchestrators from choreographies, and a rules-based transformation from Petri nets into BPMN.
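The core synthesis step (giving the orchestrator a relay role for every interaction in the choreography) can be conveyed with a minimal Python sketch. The data structures and the simple sequential relay strategy below are illustrative assumptions, not the thesis's actual algorithm, which also handles concurrency via Petri nets and the theory of regions.

```python
# Illustrative sketch only: a choreography is given as an ordered list of
# interactions (sender, receiver, message); the orchestrator is synthesised as a
# state machine that relays each message, i.e. it first receives the message
# from the sender and then forwards it to the receiver.

from dataclasses import dataclass

@dataclass(frozen=True)
class Transition:
    source: int    # orchestrator state before the step
    action: str    # "receive" or "send"
    partner: str   # partner service involved in the step
    message: str
    target: int    # orchestrator state after the step

def synthesise_orchestrator(choreography):
    """Turn [(sender, receiver, message), ...] into a relay state machine."""
    transitions, state = [], 0
    for sender, receiver, message in choreography:
        transitions.append(Transition(state, "receive", sender, message, state + 1))
        transitions.append(Transition(state + 1, "send", receiver, message, state + 2))
        state += 2
    return transitions

# Example: a two-step purchase choreography.
for t in synthesise_orchestrator([("Customer", "Supplier", "order"),
                                  ("Supplier", "Customer", "invoice")]):
    print(t)
```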
Abstract:
Privacy enhancing protocols (PEPs) are a family of protocols that allow secure exchange and management of sensitive user information. They are important in preserving users’ privacy in today’s open environment. Proof of the correctness of PEPs is necessary before they can be deployed. However, the traditional provable security approach, though well established for verifying cryptographic primitives, is not applicable to PEPs. We apply the formal method of Coloured Petri Nets (CPNs) to construct an executable specification of a representative PEP, namely the Private Information Escrow Bound to Multiple Conditions Protocol (PIEMCP). Formal semantics of the CPN specification allow us to reason about various security properties of PIEMCP using state space analysis techniques. This investigation provides us with preliminary insights for modeling and verification of PEPs in general, demonstrating the benefit of applying the CPN-based formal approach to proving the correctness of PEPs.
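As an illustration of the kind of state space analysis mentioned above, the following Python sketch enumerates the reachable states of a toy escrow model and checks a simple safety property in every state. The toy model and the property are hypothetical stand-ins; they are not the PIEMCP specification, which is expressed as a Coloured Petri Net and analysed with CPN tooling.

```python
# Minimal sketch of state space analysis over a toy escrow model: enumerate all
# reachable states by breadth-first search, then check a safety property.

from collections import deque

def successors(state):
    """Toy model: a credential moves escrowed -> requested -> released, and
    release is only allowed once the condition flag has been set."""
    phase, condition_met = state
    nxt = []
    if phase == "escrowed":
        nxt.append(("requested", condition_met))
        nxt.append((phase, True))          # the condition may be satisfied at any time
    elif phase == "requested":
        nxt.append((phase, True))
        if condition_met:
            nxt.append(("released", condition_met))
    return nxt

def explore(initial):
    seen, frontier = {initial}, deque([initial])
    while frontier:
        s = frontier.popleft()
        for t in successors(s):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen

states = explore(("escrowed", False))
# Safety property: the credential is never released before its condition holds.
assert all(not (phase == "released" and not cond) for phase, cond in states)
print(f"{len(states)} reachable states, safety property holds")
```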
Abstract:
The inquiry documented in this thesis is located at the nexus of technological innovation and traditional schooling. As we enter the second decade of a new century, few would argue against the increasingly urgent need to integrate digital literacies with traditional academic knowledge. Yet, despite substantial investments from governments and businesses, the adoption and diffusion of contemporary digital tools in formal schooling remain sluggish. To date, research on technology adoption in schools tends to take a deficit perspective of schools and teachers, with the lack of resources and teacher ‘technophobia’ most commonly cited as barriers to digital uptake. Corresponding interventions that focus on increasing funding and upskilling teachers, however, have made little difference to adoption trends in the last decade. Empirical evidence that explicates the cultural and pedagogical complexities of innovation diffusion within long-established conventions of mainstream schooling, particularly from the standpoint of students, is wanting. To address this knowledge gap, this thesis inquires into how students evaluate and account for the constraints and affordances of contemporary digital tools when they engage with them as part of their conventional schooling. It documents the attempted integration of a student-led Web 2.0 learning initiative, known as the Student Media Centre (SMC), into the schooling practices of a long-established, high-performing independent senior boys’ school in urban Australia. The study employed an ‘explanatory’ two-phase research design (Creswell, 2003) that combined complementary quantitative and qualitative methods to achieve both breadth of measurement and richness of characterisation. In the initial quantitative phase, a self-reported questionnaire was administered to the senior school student population to determine adoption trends and predictors of SMC usage (N=481). Measurement constructs included individual learning dispositions (learning and performance goals, cognitive playfulness and personal innovativeness), as well as social and technological variables (peer support, perceived usefulness and ease of use). Incremental predictive models of SMC usage were built using Classification and Regression Tree (CART) modelling: (i) individual-level predictors, (ii) individual and social predictors, and (iii) individual, social and technological predictors (a minimal illustration of this incremental approach follows the abstract). Peer support emerged as the best predictor of SMC usage. Other salient predictors included perceived ease of use and usefulness, cognitive playfulness and learning goals. On the whole, an overwhelming proportion of students reported low usage levels, low perceived usefulness and a lack of peer support for engaging with the digital learning initiative. The small minority of frequent users reported having high levels of peer support and robust learning goal orientations, rather than being predominantly driven by performance goals. These findings indicate that tensions around social validation, digital learning and academic performance pressures influence students’ engagement with the Web 2.0 learning initiative. The qualitative phase that followed provided insights into these tensions by shifting the analytics from individual attitudes and behaviours to shared social and cultural reasoning practices that explain students’ engagement with the innovation. Six in-depth focus groups, comprising 60 students with different levels of SMC usage, were conducted, audio-recorded and transcribed.
Textual data were analysed using Membership Categorisation Analysis. Students’ accounts converged around a key proposition: the Web 2.0 learning initiative was useful-in-principle but useless-in-practice. While students endorsed the usefulness of the SMC for enhancing multimodal engagement, extending peer-to-peer networks and acquiring real-world skills, they also called attention to a number of constraints that impeded the realisation of these design affordances in practice. These constraints were cast in terms of three binary formulations of social and cultural imperatives at play within the school: (i) ‘cool/uncool’, (ii) ‘dominant staff/compliant student’, and (iii) ‘digital learning/academic performance’. The first formulation foregrounds the social stigma of the SMC among peers and its resultant lack of positive network benefits. The second relates to students’ perception of the school culture as authoritarian and punitive, with adverse effects on the very student agency required to drive the innovation. The third points to academic performance pressures in a crowded curriculum with tight timelines. Taken together, findings from both phases of the study provide the following key insights. First, students endorsed the learning affordances of contemporary digital tools such as the SMC for enhancing their current schooling practices. For the majority of students, however, these learning affordances were overshadowed by the performative demands of schooling, both social and academic. The student participants saw engagement with the SMC in school as distinct from, even oppositional to, the conventional social and academic performance indicators of schooling, namely (i) being ‘cool’ (or at least ‘not uncool’), (ii) being sufficiently ‘compliant’, and (iii) achieving good academic grades. Their reasoned response, therefore, was simply to resist engagement with the digital learning innovation. Second, a small minority of students seemed dispositionally inclined to negotiate the learning affordances and performance constraints of digital learning and traditional schooling more effectively than others. These students were able to engage more frequently and meaningfully with the SMC in school. Their ability to adapt and traverse seemingly incommensurate social and institutional identities and norms is theorised as cultural agility: a dispositional construct that comprises personal innovativeness, cognitive playfulness and learning goals orientation. The logic, then, is ‘both/and’ rather than ‘either/or’ for these individuals, who have the capacity to accommodate both learning and performance in school, whether in terms of digital engagement and academic excellence, or successful brokerage across multiple social identities and institutional affiliations within the school. In sum, this study takes us beyond the familiar terrain of deficit discourses that tend to blame institutional conservatism, lack of resourcing and teacher resistance for the low uptake of digital technologies in schools. It does so by providing an empirical base for the development of a ‘third way’ of theorising technological and pedagogical innovation in schools, one which is more informed by students as critical stakeholders and thus more relevant to the lived culture within the school and its complex relationship to students’ lives outside of school. It is in this relationship that we find an explanation for how these individuals can, at one and the same time, be digital kids and analogue students.
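The incremental CART modelling referred to in the abstract can be sketched as follows. The data here are simulated and the column names are hypothetical; the sketch only shows the shape of the analysis (fitting successively richer predictor sets), not the study's actual models or results.

```python
# Minimal sketch of incremental CART modelling on simulated data
# (illustrative only; predictor names and effect sizes are invented).

import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 481                                  # matches the reported sample size; data are simulated
X = rng.normal(size=(n, 3))              # columns: learning goals, peer support, perceived ease of use
usage = (0.2 * X[:, 0] + 0.8 * X[:, 1] + 0.4 * X[:, 2]
         + rng.normal(scale=0.5, size=n)) > 0.5

# Incremental models: individual predictors only, then + social, then + technological.
for k, label in [(1, "individual"), (2, "+ social"), (3, "+ technological")]:
    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X[:, :k], usage)
    print(label, round(tree.score(X[:, :k], usage), 2))
```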
Abstract:
The MDG deadline is fast approaching and the climate within the United Nations remains positive but skeptical. A common feeling is that a great deal of work and headway has been made, but that the MDGs will not be achieved in full by 2015. The largest problem facing the success of the MDGs is, and unless mitigated will remain, mismanaged governance. This argument is confirmed by a strong line of publications stemming from the United Nations that target methods (depending on the regional or country context) such as improving governance by combating corruption and instituting accountability, peace and stability, and transparency. Furthermore, a logical assessment of the framework within which the MDGs operate (i.e. international pressure and local civil socio-economic and/or political initiatives pushing governments to progress with the MDGs) identifies the State's governing apparatus as the key to the success of the MDGs. It is argued that a new analytic framework and grounded theory of democracy (the Element of Democracy) is needed in order to improve governance and enhance democracy. By looking beyond the confines of the MDGs and focusing on properly rectifying poor governance, the progress of the MDGs can be accelerated, as societies and their governments will be, at a minimum, held more accountable for the success of programs in their respective countries. The paper demonstrates the logic of this argument, especially highlighting a new way of viewing democracy, and certain early practices which can accelerate the MDGs in the short to medium term.
Abstract:
This article is concerned with the repercussions of societal change for transnational media. It offers a new understanding of multilingual programming strategies by examining “Radio MultiKulti” (RM), a public service radio station discontinued by Rundfunk Berlin-Brandenburg as of 1 January 2009. In its fourteen years of existence, “RM” had to implement a well-intended and politically motivated logic of a ‘multiethnic, intercultural service station’. However, as we demonstrate, this direction, despite some achievements, resulted in constraints on RM’s journalistic activities and language policy and drew criticism over the station’s economic viability. This paper proposes that multilingual media services be framed by the concept of practical hybridity, which allows the necessary responsiveness to an ever-changing media environment, currently that of digital culture. Our approach draws on Mikhail Bakhtin’s and Yuri Lotman’s theoretical approaches to hybridity, as well as on in-depth interviews conducted with “RM” staff from 2005 onwards, further interviews with key agents outside RM, and continuous monitoring of the public debate, which culminated at the end of 2008 in the controversial decision to close the radio station. Against this background, the concluding remarks are meant to contribute to the scholarly debate on hybridization as well as to inform multilingual media policy in the 21st century.
Abstract:
Multilevel inverters provide an attractive solution for power electronics applications where both reduced harmonic content and high voltages are required. In this paper, a novel predictive current control technique is proposed for a three-phase multilevel inverter, which controls the capacitor voltages and load currents with low switching losses. The advantage of this contribution is that the technique can be applied to more voltage levels without significantly changing the control circuit. A three-phase three-level inverter with a pure inductive load has been implemented to track reference currents using analogue circuits and a programmable logic device.
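As a rough illustration of the predictive current control idea, the sketch below predicts the next-step current of a single phase for each candidate voltage level of a three-level inverter and selects the level that minimises the current error. The load parameters are hypothetical, and the capacitor-voltage term of the paper's actual cost function is omitted.

```python
# Minimal single-phase sketch of finite-set predictive current control
# (hypothetical parameters; the paper's controller also balances the capacitor
# voltages and runs on all three phases of the three-level inverter).

import numpy as np

Ts, L, R, Vdc = 100e-6, 10e-3, 0.1, 400.0            # sample time, inductance, resistance, DC link
levels = np.array([-Vdc / 2, 0.0, Vdc / 2])          # candidate phase voltages of a three-level leg

def best_level(i_now, i_ref):
    """Predict the next-step current for every candidate level and return the
    level whose prediction is closest to the reference current."""
    i_pred = i_now + (Ts / L) * (levels - R * i_now)  # forward-Euler prediction of the RL load
    return levels[np.argmin(np.abs(i_ref - i_pred))]

# One control step: measured current 2 A, reference 5 A -> the positive level is chosen.
print(best_level(2.0, 5.0))
```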
Abstract:
Modern computer graphics systems are able to construct renderings of such high quality that viewers are deceived into regarding the images as coming from a photographic source. Large amounts of computing resources are expended in this rendering process, using complex mathematical models of lighting and shading. However, psychophysical experiments have revealed that viewers only regard certain informative regions within a presented image. Furthermore, it has been shown that these visually important regions contain low-level visual feature differences that attract the attention of the viewer. This thesis will present a new approach to image synthesis that exploits these experimental findings by modulating the spatial quality of image regions by their visual importance. Efficiency gains are therefore reaped, without sacrificing much of the perceived quality of the image. Two tasks must be undertaken to achieve this goal: first, the design of an appropriate region-based model of visual importance; and second, the modification of progressive rendering techniques to effect an importance-based rendering approach. A rule-based fuzzy logic model is presented that computes, using spatial feature differences, the relative visual importance of regions in an image. This model improves upon previous work by incorporating threshold effects induced by global feature difference distributions and by using texture concentration measures. A modified approach to progressive ray-tracing is also presented. This new approach uses the visual importance model to guide the progressive refinement of an image. In addition, this concept of visual importance has been incorporated into supersampling, texture mapping and computer animation techniques. Experimental results are presented, illustrating the efficiency gains obtained from this method of progressive rendering. This visual importance-based rendering approach is expected to have applications in the entertainment industry, where image fidelity may be sacrificed for efficiency purposes, as long as the overall visual impression of the scene is maintained. Different aspects of the approach should find many other applications in image compression, image retrieval, progressive data transmission and active robotic vision.
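The flavour of such a rule-based fuzzy importance model can be conveyed with a small sketch. The membership functions, rules and weights below are invented for illustration; the thesis's model uses more features, global threshold effects and texture concentration measures.

```python
# Illustrative sketch: estimate a region's visual importance in [0, 1] from two
# feature differences using triangular memberships and two fuzzy rules,
# combined with a zero-order Sugeno (weighted-average) defuzzification.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def importance(contrast_diff, texture_conc):
    high_contrast = tri(contrast_diff, 0.3, 1.0, 1.7)   # "the region's contrast difference is high"
    low_texture = tri(texture_conc, -0.7, 0.0, 0.7)     # "the region's texture concentration is low"
    # Rule 1: high contrast difference       -> moderately important (weight 0.6)
    # Rule 2: high contrast AND low texture  -> very important (weight 1.0)
    r1, r2 = high_contrast, min(high_contrast, low_texture)
    total = r1 + r2
    return (0.6 * r1 + 1.0 * r2) / total if total else 0.0

# A region with a strong contrast difference and little texture clutter.
print(round(importance(0.9, 0.1), 2))   # -> 0.8
```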
Abstract:
SCAPE is an interactive simulation that allows teachers and students to experiment with sustainable urban design. The project is based on the Kelvin Grove Urban Village, Brisbane. Groups of students role-play as political, retail, elderly, student, council and builder characters to negotiate game decisions around land use, density, housing types and transport in order to design a sustainable urban community. As they do so, the 3D simulation reacts in real time to illustrate what the village would look like, as well as providing statistical information about the community they are creating. SCAPE brings together education, urban professional and technology expertise, helping it achieve educational outcomes, reflect real-world scenarios and include sophisticated logic and decision-making processes and effects.

The research methodology was primarily practice-led, underpinned by action research methods, resulting in innovative approaches and techniques for adapting digital games and simulation technologies to create dynamic and engaging experiences in pedagogical contexts. It also illustrates the possibilities for urban designers to engage a variety of communities in the processes, complexities and possibilities of urban development and sustainability.
Abstract:
We propose to design a Custom Learning System that responds to the unique needs and potentials of individual students, regardless of their location, abilities, attitudes, and circumstances. This project is intentionally provocative and future-looking but it is not unrealistic or unfeasible. We propose that by combining complex learning databases with a learner’s personal data, we could provide all students with a personal, customizable, and flexible education. This paper presents the initial research undertaken for this project of which the main challenges were to broadly map the complex web of data available, to identify what logic models are required to make the data meaningful for learning, and to translate this knowledge into simple and easy-to-use interfaces. The ultimate outcome of this research will be a series of candidate user interfaces and a broad system logic model for a new smart system for personalized learning. This project is student-centered, not techno-centric, aiming to deliver innovative solutions for learners and schools. It is deliberately future-looking, allowing us to ask questions that take us beyond the limitations of today to motivate new demands on technology.